2025-09-19 06:18:34.021717 | Job console starting
2025-09-19 06:18:34.042160 | Updating git repos
2025-09-19 06:18:34.092418 | Cloning repos into workspace
2025-09-19 06:18:34.280170 | Restoring repo states
2025-09-19 06:18:34.306324 | Merging changes
2025-09-19 06:18:34.306350 | Checking out repos
2025-09-19 06:18:34.610014 | Preparing playbooks
2025-09-19 06:18:35.186802 | Running Ansible setup
2025-09-19 06:18:39.402468 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-09-19 06:18:40.231688 |
2025-09-19 06:18:40.231855 | PLAY [Base pre]
2025-09-19 06:18:40.263351 |
2025-09-19 06:18:40.263568 | TASK [Setup log path fact]
2025-09-19 06:18:40.286136 | orchestrator | ok
2025-09-19 06:18:40.305165 |
2025-09-19 06:18:40.305308 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-19 06:18:40.346825 | orchestrator | ok
2025-09-19 06:18:40.363892 |
2025-09-19 06:18:40.364016 | TASK [emit-job-header : Print job information]
2025-09-19 06:18:40.413653 | # Job Information
2025-09-19 06:18:40.413955 | Ansible Version: 2.16.14
2025-09-19 06:18:40.414137 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-09-19 06:18:40.414239 | Pipeline: post
2025-09-19 06:18:40.414434 | Executor: 521e9411259a
2025-09-19 06:18:40.414509 | Triggered by: https://github.com/osism/testbed/commit/24e3d22d2253faadc72bec5801e865adde279d36
2025-09-19 06:18:40.414535 | Event ID: 752408c0-9520-11f0-9ea1-fcce68dbf547
2025-09-19 06:18:40.424810 |
2025-09-19 06:18:40.424979 | LOOP [emit-job-header : Print node information]
2025-09-19 06:18:40.599780 | orchestrator | ok:
2025-09-19 06:18:40.600132 | orchestrator | # Node Information
2025-09-19 06:18:40.600194 | orchestrator | Inventory Hostname: orchestrator
2025-09-19 06:18:40.600236 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-09-19 06:18:40.600271 | orchestrator | Username: zuul-testbed06
2025-09-19 06:18:40.600306 | orchestrator | Distro: Debian 12.12
2025-09-19 06:18:40.600346 | orchestrator | Provider: static-testbed
2025-09-19 06:18:40.600380 | orchestrator | Region:
2025-09-19 06:18:40.600414 | orchestrator | Label: testbed-orchestrator
2025-09-19 06:18:40.600445 | orchestrator | Product Name: OpenStack Nova
2025-09-19 06:18:40.600476 | orchestrator | Interface IP: 81.163.193.140
2025-09-19 06:18:40.627695 |
2025-09-19 06:18:40.627844 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-09-19 06:18:41.122549 | orchestrator -> localhost | changed
2025-09-19 06:18:41.131311 |
2025-09-19 06:18:41.131432 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-09-19 06:18:42.303368 | orchestrator -> localhost | changed
2025-09-19 06:18:42.317516 |
2025-09-19 06:18:42.317678 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-09-19 06:18:42.608176 | orchestrator -> localhost | ok
2025-09-19 06:18:42.618492 |
2025-09-19 06:18:42.619026 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-09-19 06:18:42.662452 | orchestrator | ok
2025-09-19 06:18:42.702134 | orchestrator | included: /var/lib/zuul/builds/f4c728bda45d4a6b95911456e6e30ad1/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-09-19 06:18:42.713092 |
2025-09-19 06:18:42.713239 | TASK [add-build-sshkey : Create Temp SSH key]
2025-09-19 06:18:44.478235 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-09-19 06:18:44.478521 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/f4c728bda45d4a6b95911456e6e30ad1/work/f4c728bda45d4a6b95911456e6e30ad1_id_rsa
2025-09-19 06:18:44.478566 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/f4c728bda45d4a6b95911456e6e30ad1/work/f4c728bda45d4a6b95911456e6e30ad1_id_rsa.pub
2025-09-19 06:18:44.478593 | orchestrator -> localhost | The key fingerprint is:
2025-09-19 06:18:44.478638 | orchestrator -> localhost | SHA256:bbh68/cG39uB0Tx44J4DXcemxlElZStMLuHlgZICO+k zuul-build-sshkey
2025-09-19 06:18:44.478664 | orchestrator -> localhost | The key's randomart image is:
2025-09-19 06:18:44.478697 | orchestrator -> localhost | +---[RSA 3072]----+
2025-09-19 06:18:44.478719 | orchestrator -> localhost | | .. ...+..*|
2025-09-19 06:18:44.478743 | orchestrator -> localhost | | o. o..B .=.|
2025-09-19 06:18:44.478765 | orchestrator -> localhost | | + . .o *.o+|
2025-09-19 06:18:44.478787 | orchestrator -> localhost | | . . o +.B+.|
2025-09-19 06:18:44.478809 | orchestrator -> localhost | | E S o. =+= |
2025-09-19 06:18:44.478862 | orchestrator -> localhost | | o +.= .|
2025-09-19 06:18:44.478889 | orchestrator -> localhost | | . B o |
2025-09-19 06:18:44.478912 | orchestrator -> localhost | | .o . + +|
2025-09-19 06:18:44.478936 | orchestrator -> localhost | | .. o.. o..o|
2025-09-19 06:18:44.478959 | orchestrator -> localhost | +----[SHA256]-----+
2025-09-19 06:18:44.479027 | orchestrator -> localhost | ok: Runtime: 0:00:01.224280
2025-09-19 06:18:44.497871 |
2025-09-19 06:18:44.498005 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-09-19 06:18:44.535252 | orchestrator | ok
2025-09-19 06:18:44.552568 | orchestrator | included: /var/lib/zuul/builds/f4c728bda45d4a6b95911456e6e30ad1/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-09-19 06:18:44.564459 |
2025-09-19 06:18:44.564655 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-09-19 06:18:44.598949 | orchestrator | skipping: Conditional result was False
2025-09-19 06:18:44.613430 |
2025-09-19 06:18:44.613548 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-09-19 06:18:45.541858 | orchestrator | changed
2025-09-19 06:18:45.549684 |
2025-09-19 06:18:45.549809 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-09-19 06:18:45.880701 | orchestrator | ok
2025-09-19 06:18:45.904024 |
2025-09-19 06:18:45.904168 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-09-19 06:18:46.418072 | orchestrator | ok
2025-09-19 06:18:46.445268 |
2025-09-19 06:18:46.445418 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-09-19 06:18:46.927142 | orchestrator | ok
2025-09-19 06:18:46.945658 |
2025-09-19 06:18:46.945798 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-09-19 06:18:47.027027 | orchestrator | skipping: Conditional result was False
2025-09-19 06:18:47.039159 |
2025-09-19 06:18:47.039297 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-09-19 06:18:48.817445 | orchestrator -> localhost | changed
2025-09-19 06:18:48.830595 |
2025-09-19 06:18:48.830712 | TASK [add-build-sshkey : Add back temp key]
2025-09-19 06:18:49.916263 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/f4c728bda45d4a6b95911456e6e30ad1/work/f4c728bda45d4a6b95911456e6e30ad1_id_rsa (zuul-build-sshkey)
2025-09-19 06:18:49.916481 | orchestrator -> localhost | ok: Runtime: 0:00:00.026988
2025-09-19 06:18:49.923542 |
2025-09-19 06:18:49.923645 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-09-19 06:18:50.463874 | orchestrator | ok
2025-09-19 06:18:50.469757 |
2025-09-19 06:18:50.469848 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-09-19 06:18:50.500816 | orchestrator | skipping: Conditional result was False
2025-09-19 06:18:50.538418 |
2025-09-19 06:18:50.538515 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-09-19 06:18:50.944832 | orchestrator | ok
2025-09-19 06:18:50.964291 |
2025-09-19 06:18:50.964408 | TASK [validate-host : Define zuul_info_dir fact]
2025-09-19 06:18:51.022202 | orchestrator | ok
2025-09-19 06:18:51.039783 |
2025-09-19 06:18:51.039892 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-09-19 06:18:51.943516 | orchestrator -> localhost | ok
2025-09-19 06:18:51.950596 |
2025-09-19 06:18:51.950723 | TASK [validate-host : Collect information about the host]
2025-09-19 06:18:53.462381 | orchestrator | ok
2025-09-19 06:18:53.486113 |
2025-09-19 06:18:53.486217 | TASK [validate-host : Sanitize hostname]
2025-09-19 06:18:53.578154 | orchestrator | ok
2025-09-19 06:18:53.583392 |
2025-09-19 06:18:53.583479 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-09-19 06:18:54.759557 | orchestrator -> localhost | changed
2025-09-19 06:18:54.764550 |
2025-09-19 06:18:54.764643 | TASK [validate-host : Collect information about zuul worker]
2025-09-19 06:18:55.397967 | orchestrator | ok
2025-09-19 06:18:55.402338 |
2025-09-19 06:18:55.402420 | TASK [validate-host : Write out all zuul information for each host]
2025-09-19 06:18:56.416352 | orchestrator -> localhost | changed
2025-09-19 06:18:56.424689 |
2025-09-19 06:18:56.424771 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-09-19 06:18:56.711681 | orchestrator | ok
2025-09-19 06:18:56.716690 |
2025-09-19 06:18:56.716769 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-09-19 06:19:37.672007 | orchestrator | changed:
2025-09-19 06:19:37.672314 | orchestrator | .d..t...... src/
2025-09-19 06:19:37.672355 | orchestrator | .d..t...... src/github.com/
2025-09-19 06:19:37.672380 | orchestrator | .d..t...... src/github.com/osism/
2025-09-19 06:19:37.672402 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-09-19 06:19:37.672424 | orchestrator | RedHat.yml
2025-09-19 06:19:37.716458 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-09-19 06:19:37.716476 | orchestrator | RedHat.yml
2025-09-19 06:19:37.716529 | orchestrator | = 2.2.0"...
2025-09-19 06:19:48.516735 | orchestrator | 06:19:48.516 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-09-19 06:19:48.550148 | orchestrator | 06:19:48.549 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-09-19 06:19:49.184795 | orchestrator | 06:19:49.184 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-09-19 06:19:49.850874 | orchestrator | 06:19:49.850 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-09-19 06:19:49.930006 | orchestrator | 06:19:49.929 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-09-19 06:19:50.377831 | orchestrator | 06:19:50.377 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-09-19 06:19:50.806549 | orchestrator | 06:19:50.806 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-09-19 06:19:51.567002 | orchestrator | 06:19:51.566 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-09-19 06:19:51.567074 | orchestrator | 06:19:51.566 STDOUT terraform: Providers are signed by their developers.
2025-09-19 06:19:51.567082 | orchestrator | 06:19:51.566 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-09-19 06:19:51.567088 | orchestrator | 06:19:51.567 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-09-19 06:19:51.567123 | orchestrator | 06:19:51.567 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-09-19 06:19:51.567184 | orchestrator | 06:19:51.567 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-09-19 06:19:51.567223 | orchestrator | 06:19:51.567 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-09-19 06:19:51.567294 | orchestrator | 06:19:51.567 STDOUT terraform: you run "tofu init" in the future.
2025-09-19 06:19:51.567300 | orchestrator | 06:19:51.567 STDOUT terraform: OpenTofu has been successfully initialized!
2025-09-19 06:19:51.567329 | orchestrator | 06:19:51.567 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-09-19 06:19:51.567384 | orchestrator | 06:19:51.567 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-09-19 06:19:51.567392 | orchestrator | 06:19:51.567 STDOUT terraform: should now work.
2025-09-19 06:19:51.567441 | orchestrator | 06:19:51.567 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-09-19 06:19:51.567490 | orchestrator | 06:19:51.567 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-09-19 06:19:51.567555 | orchestrator | 06:19:51.567 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-09-19 06:19:51.699674 | orchestrator | 06:19:51.696 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-09-19 06:19:51.699894 | orchestrator | 06:19:51.696 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-09-19 06:19:51.897155 | orchestrator | 06:19:51.896 STDOUT terraform: Created and switched to workspace "ci"!
2025-09-19 06:19:51.897216 | orchestrator | 06:19:51.896 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-09-19 06:19:51.897225 | orchestrator | 06:19:51.897 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-09-19 06:19:51.897230 | orchestrator | 06:19:51.897 STDOUT terraform: for this configuration.
2025-09-19 06:19:52.024524 | orchestrator | 06:19:52.024 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-09-19 06:19:52.024690 | orchestrator | 06:19:52.024 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-09-19 06:19:52.134476 | orchestrator | 06:19:52.131 STDOUT terraform: ci.auto.tfvars
2025-09-19 06:19:52.134558 | orchestrator | 06:19:52.134 STDOUT terraform: default_custom.tf
2025-09-19 06:19:52.295333 | orchestrator | 06:19:52.295 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
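Every console entry in this log follows the Zuul executor's `timestamp | message` or `timestamp | node | message` layout (with `node` sometimes being a delegation pair such as `orchestrator -> localhost`). As a minimal sketch of how such entries can be split back into their fields when post-processing a log like this one, `parse_entry` below is a hypothetical helper, not part of the job itself:

```python
import re
from datetime import datetime

# "2025-09-19 06:18:45.541858 | orchestrator | changed"
# "2025-09-19 06:18:40.413653 | # Job Information"
LINE_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{6}) \| ?(?P<rest>.*)$"
)
# Node prefix: a hostname, optionally a "node -> delegate" pair.
NODE_RE = re.compile(r"^(?P<node>[\w.-]+(?: -> [\w.-]+)?) \| (?P<msg>.*)$")

def parse_entry(line: str) -> dict:
    """Split one console line into timestamp, node (or None) and message."""
    m = LINE_RE.match(line)
    if not m:
        raise ValueError(f"not a console entry: {line!r}")
    ts = datetime.strptime(m.group("ts"), "%Y-%m-%d %H:%M:%S.%f")
    rest = m.group("rest")
    n = NODE_RE.match(rest)
    if n:  # per-node task output
        return {"ts": ts, "node": n.group("node"), "msg": n.group("msg")}
    return {"ts": ts, "node": None, "msg": rest}  # executor-level line

entry = parse_entry("2025-09-19 06:18:45.541858 | orchestrator | changed")
print(entry["node"], entry["msg"])  # prints "orchestrator changed"
```

Parsed this way, the entries can be filtered by node or task, or the timestamps differenced to find the slow steps (e.g. the ~41 s repo synchronization above).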
2025-09-19 06:19:53.383358 | orchestrator | 06:19:53.383 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-09-19 06:19:53.904935 | orchestrator | 06:19:53.904 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-09-19 06:19:54.144818 | orchestrator | 06:19:54.144 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-09-19 06:19:54.144904 | orchestrator | 06:19:54.144 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-09-19 06:19:54.144911 | orchestrator | 06:19:54.144 STDOUT terraform:  + create 2025-09-19 06:19:54.144916 | orchestrator | 06:19:54.144 STDOUT terraform:  <= read (data resources) 2025-09-19 06:19:54.144922 | orchestrator | 06:19:54.144 STDOUT terraform: OpenTofu will perform the following actions: 2025-09-19 06:19:54.144928 | orchestrator | 06:19:54.144 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-09-19 06:19:54.144934 | orchestrator | 06:19:54.144 STDOUT terraform:  # (config refers to values not yet known) 2025-09-19 06:19:54.144979 | orchestrator | 06:19:54.144 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-09-19 06:19:54.145006 | orchestrator | 06:19:54.144 STDOUT terraform:  + checksum = (known after apply) 2025-09-19 06:19:54.145034 | orchestrator | 06:19:54.144 STDOUT terraform:  + created_at = (known after apply) 2025-09-19 06:19:54.145063 | orchestrator | 06:19:54.145 STDOUT terraform:  + file = (known after apply) 2025-09-19 06:19:54.145093 | orchestrator | 06:19:54.145 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.145121 | orchestrator | 06:19:54.145 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 06:19:54.145143 | orchestrator | 06:19:54.145 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-09-19 06:19:54.145176 | orchestrator | 06:19:54.145 
STDOUT terraform:  + min_ram_mb = (known after apply) 2025-09-19 06:19:54.145200 | orchestrator | 06:19:54.145 STDOUT terraform:  + most_recent = true 2025-09-19 06:19:54.145218 | orchestrator | 06:19:54.145 STDOUT terraform:  + name = (known after apply) 2025-09-19 06:19:54.145247 | orchestrator | 06:19:54.145 STDOUT terraform:  + protected = (known after apply) 2025-09-19 06:19:54.145275 | orchestrator | 06:19:54.145 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:54.145304 | orchestrator | 06:19:54.145 STDOUT terraform:  + schema = (known after apply) 2025-09-19 06:19:54.145331 | orchestrator | 06:19:54.145 STDOUT terraform:  + size_bytes = (known after apply) 2025-09-19 06:19:54.145359 | orchestrator | 06:19:54.145 STDOUT terraform:  + tags = (known after apply) 2025-09-19 06:19:54.145388 | orchestrator | 06:19:54.145 STDOUT terraform:  + updated_at = (known after apply) 2025-09-19 06:19:54.145394 | orchestrator | 06:19:54.145 STDOUT terraform:  } 2025-09-19 06:19:54.145452 | orchestrator | 06:19:54.145 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-09-19 06:19:54.145460 | orchestrator | 06:19:54.145 STDOUT terraform:  # (config refers to values not yet known) 2025-09-19 06:19:54.145512 | orchestrator | 06:19:54.145 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-09-19 06:19:54.145535 | orchestrator | 06:19:54.145 STDOUT terraform:  + checksum = (known after apply) 2025-09-19 06:19:54.145562 | orchestrator | 06:19:54.145 STDOUT terraform:  + created_at = (known after apply) 2025-09-19 06:19:54.145590 | orchestrator | 06:19:54.145 STDOUT terraform:  + file = (known after apply) 2025-09-19 06:19:54.145618 | orchestrator | 06:19:54.145 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.145645 | orchestrator | 06:19:54.145 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 06:19:54.145672 | orchestrator | 06:19:54.145 STDOUT terraform:  + 
min_disk_gb = (known after apply) 2025-09-19 06:19:54.145700 | orchestrator | 06:19:54.145 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-09-19 06:19:54.145723 | orchestrator | 06:19:54.145 STDOUT terraform:  + most_recent = true 2025-09-19 06:19:54.145741 | orchestrator | 06:19:54.145 STDOUT terraform:  + name = (known after apply) 2025-09-19 06:19:54.145767 | orchestrator | 06:19:54.145 STDOUT terraform:  + protected = (known after apply) 2025-09-19 06:19:54.145795 | orchestrator | 06:19:54.145 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:54.145823 | orchestrator | 06:19:54.145 STDOUT terraform:  + schema = (known after apply) 2025-09-19 06:19:54.145855 | orchestrator | 06:19:54.145 STDOUT terraform:  + size_bytes = (known after apply) 2025-09-19 06:19:54.145884 | orchestrator | 06:19:54.145 STDOUT terraform:  + tags = (known after apply) 2025-09-19 06:19:54.145918 | orchestrator | 06:19:54.145 STDOUT terraform:  + updated_at = (known after apply) 2025-09-19 06:19:54.145924 | orchestrator | 06:19:54.145 STDOUT terraform:  } 2025-09-19 06:19:54.146294 | orchestrator | 06:19:54.146 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-09-19 06:19:54.146333 | orchestrator | 06:19:54.146 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-09-19 06:19:54.146426 | orchestrator | 06:19:54.146 STDOUT terraform:  + content = (known after apply) 2025-09-19 06:19:54.146462 | orchestrator | 06:19:54.146 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-19 06:19:54.146500 | orchestrator | 06:19:54.146 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-19 06:19:54.146637 | orchestrator | 06:19:54.146 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-19 06:19:54.146670 | orchestrator | 06:19:54.146 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-19 06:19:54.146813 | orchestrator | 06:19:54.146 STDOUT terraform:  + content_sha256 = (known after 
apply) 2025-09-19 06:19:54.146855 | orchestrator | 06:19:54.146 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-19 06:19:54.147150 | orchestrator | 06:19:54.146 STDOUT terraform:  + directory_permission = "0777" 2025-09-19 06:19:54.147157 | orchestrator | 06:19:54.147 STDOUT terraform:  + file_permission = "0644" 2025-09-19 06:19:54.147206 | orchestrator | 06:19:54.147 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-09-19 06:19:54.147240 | orchestrator | 06:19:54.147 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.147246 | orchestrator | 06:19:54.147 STDOUT terraform:  } 2025-09-19 06:19:54.147307 | orchestrator | 06:19:54.147 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-09-19 06:19:54.147314 | orchestrator | 06:19:54.147 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-09-19 06:19:54.147367 | orchestrator | 06:19:54.147 STDOUT terraform:  + content = (known after apply) 2025-09-19 06:19:54.147396 | orchestrator | 06:19:54.147 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-19 06:19:54.147438 | orchestrator | 06:19:54.147 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-19 06:19:54.147595 | orchestrator | 06:19:54.147 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-19 06:19:54.147624 | orchestrator | 06:19:54.147 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-19 06:19:54.147717 | orchestrator | 06:19:54.147 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-19 06:19:54.147756 | orchestrator | 06:19:54.147 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-19 06:19:54.147781 | orchestrator | 06:19:54.147 STDOUT terraform:  + directory_permission = "0777" 2025-09-19 06:19:54.147902 | orchestrator | 06:19:54.147 STDOUT terraform:  + file_permission = "0644" 2025-09-19 06:19:54.147910 | orchestrator | 06:19:54.147 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-09-19 
06:19:54.148033 | orchestrator | 06:19:54.147 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.148039 | orchestrator | 06:19:54.148 STDOUT terraform:  } 2025-09-19 06:19:54.148131 | orchestrator | 06:19:54.148 STDOUT terraform:  # local_file.inventory will be created 2025-09-19 06:19:54.148151 | orchestrator | 06:19:54.148 STDOUT terraform:  + resource "local_file" "inventory" { 2025-09-19 06:19:54.148239 | orchestrator | 06:19:54.148 STDOUT terraform:  + content = (known after apply) 2025-09-19 06:19:54.148280 | orchestrator | 06:19:54.148 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-19 06:19:54.148377 | orchestrator | 06:19:54.148 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-19 06:19:54.148415 | orchestrator | 06:19:54.148 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-19 06:19:54.148450 | orchestrator | 06:19:54.148 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-19 06:19:54.148537 | orchestrator | 06:19:54.148 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-19 06:19:54.148571 | orchestrator | 06:19:54.148 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-19 06:19:54.148638 | orchestrator | 06:19:54.148 STDOUT terraform:  + directory_permission = "0777" 2025-09-19 06:19:54.148663 | orchestrator | 06:19:54.148 STDOUT terraform:  + file_permission = "0644" 2025-09-19 06:19:54.148745 | orchestrator | 06:19:54.148 STDOUT terraform:  + filename = "inventory.ci" 2025-09-19 06:19:54.148784 | orchestrator | 06:19:54.148 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.148901 | orchestrator | 06:19:54.148 STDOUT terraform:  } 2025-09-19 06:19:54.148931 | orchestrator | 06:19:54.148 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-09-19 06:19:54.148964 | orchestrator | 06:19:54.148 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-09-19 06:19:54.149067 | orchestrator | 06:19:54.148 
STDOUT terraform:  + content = (sensitive value) 2025-09-19 06:19:54.149106 | orchestrator | 06:19:54.149 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-19 06:19:54.149186 | orchestrator | 06:19:54.149 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-19 06:19:54.149224 | orchestrator | 06:19:54.149 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-19 06:19:54.149305 | orchestrator | 06:19:54.149 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-19 06:19:54.149344 | orchestrator | 06:19:54.149 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-19 06:19:54.149408 | orchestrator | 06:19:54.149 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-19 06:19:54.149435 | orchestrator | 06:19:54.149 STDOUT terraform:  + directory_permission = "0700" 2025-09-19 06:19:54.149460 | orchestrator | 06:19:54.149 STDOUT terraform:  + file_permission = "0600" 2025-09-19 06:19:54.149540 | orchestrator | 06:19:54.149 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-09-19 06:19:54.149580 | orchestrator | 06:19:54.149 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.149629 | orchestrator | 06:19:54.149 STDOUT terraform:  } 2025-09-19 06:19:54.149663 | orchestrator | 06:19:54.149 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-09-19 06:19:54.149694 | orchestrator | 06:19:54.149 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-09-19 06:19:54.149740 | orchestrator | 06:19:54.149 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.149747 | orchestrator | 06:19:54.149 STDOUT terraform:  } 2025-09-19 06:19:54.149833 | orchestrator | 06:19:54.149 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-09-19 06:19:54.149905 | orchestrator | 06:19:54.149 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-09-19 06:19:54.150007 | 
orchestrator | 06:19:54.149 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 06:19:54.158380 | orchestrator | 06:19:54.149 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 06:19:54.158684 | orchestrator | 06:19:54.158 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.160187 | orchestrator | 06:19:54.158 STDOUT terraform:  + image_id = (known after apply) 2025-09-19 06:19:54.160403 | orchestrator | 06:19:54.160 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 06:19:54.160548 | orchestrator | 06:19:54.160 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-09-19 06:19:54.160656 | orchestrator | 06:19:54.160 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:54.160732 | orchestrator | 06:19:54.160 STDOUT terraform:  + size = 80 2025-09-19 06:19:54.160806 | orchestrator | 06:19:54.160 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 06:19:54.161010 | orchestrator | 06:19:54.160 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 06:19:54.161113 | orchestrator | 06:19:54.161 STDOUT terraform:  } 2025-09-19 06:19:54.162107 | orchestrator | 06:19:54.161 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-09-19 06:19:54.162406 | orchestrator | 06:19:54.162 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-19 06:19:54.162506 | orchestrator | 06:19:54.162 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 06:19:54.162691 | orchestrator | 06:19:54.162 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 06:19:54.162866 | orchestrator | 06:19:54.162 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.163007 | orchestrator | 06:19:54.162 STDOUT terraform:  + image_id = (known after apply) 2025-09-19 06:19:54.163181 | orchestrator | 06:19:54.163 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 06:19:54.163327 | orchestrator | 06:19:54.163 STDOUT 
terraform:  + name = "testbed-volume-0-node-base" 2025-09-19 06:19:54.163539 | orchestrator | 06:19:54.163 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:54.163623 | orchestrator | 06:19:54.163 STDOUT terraform:  + size = 80 2025-09-19 06:19:54.163773 | orchestrator | 06:19:54.163 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 06:19:54.163872 | orchestrator | 06:19:54.163 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 06:19:54.163961 | orchestrator | 06:19:54.163 STDOUT terraform:  } 2025-09-19 06:19:54.164191 | orchestrator | 06:19:54.163 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-09-19 06:19:54.164501 | orchestrator | 06:19:54.164 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-19 06:19:54.164785 | orchestrator | 06:19:54.164 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 06:19:54.165114 | orchestrator | 06:19:54.164 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 06:19:54.165262 | orchestrator | 06:19:54.165 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.165531 | orchestrator | 06:19:54.165 STDOUT terraform:  + image_id = (known after apply) 2025-09-19 06:19:54.165785 | orchestrator | 06:19:54.165 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 06:19:54.166038 | orchestrator | 06:19:54.165 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-09-19 06:19:54.166274 | orchestrator | 06:19:54.166 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:54.166441 | orchestrator | 06:19:54.166 STDOUT terraform:  + size = 80 2025-09-19 06:19:54.166567 | orchestrator | 06:19:54.166 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 06:19:54.166691 | orchestrator | 06:19:54.166 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 06:19:54.166761 | orchestrator | 06:19:54.166 STDOUT terraform:  } 2025-09-19 06:19:54.167076 | orchestrator | 
  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-2-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-3-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-4-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-5-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[0] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-0-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[1] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-1-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[2] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-2-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[3] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-3-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[4] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-4-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[5] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-5-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[6] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-6-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[7] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-7-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[8] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-8-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_compute_instance_v2.manager_server will be created
  + resource "openstack_compute_instance_v2" "manager_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-4V-16"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-manager"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = (sensitive value)

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[0] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
orchestrator | 06:19:54.204 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-19 06:19:54.204788 | orchestrator | 06:19:54.204 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-19 06:19:54.204795 | orchestrator | 06:19:54.204 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 06:19:54.204886 | orchestrator | 06:19:54.204 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 06:19:54.205010 | orchestrator | 06:19:54.204 STDOUT terraform:  + config_drive = true 2025-09-19 06:19:54.205056 | orchestrator | 06:19:54.204 STDOUT terraform:  + created = (known after apply) 2025-09-19 06:19:54.205132 | orchestrator | 06:19:54.204 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-19 06:19:54.205225 | orchestrator | 06:19:54.204 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-19 06:19:54.205246 | orchestrator | 06:19:54.204 STDOUT terraform:  + force_delete = false 2025-09-19 06:19:54.205277 | orchestrator | 06:19:54.204 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-19 06:19:54.205287 | orchestrator | 06:19:54.204 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.205367 | orchestrator | 06:19:54.205 STDOUT terraform:  + image_id = (known after apply) 2025-09-19 06:19:54.205471 | orchestrator | 06:19:54.205 STDOUT terraform:  + image_name = (known after apply) 2025-09-19 06:19:54.205495 | orchestrator | 06:19:54.205 STDOUT terraform:  + key_pair = "testbed" 2025-09-19 06:19:54.205506 | orchestrator | 06:19:54.205 STDOUT terraform:  + name = "testbed-node-4" 2025-09-19 06:19:54.205536 | orchestrator | 06:19:54.205 STDOUT terraform:  + power_state = "active" 2025-09-19 06:19:54.205543 | orchestrator | 06:19:54.205 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:54.205690 | orchestrator | 06:19:54.205 STDOUT terraform:  + security_groups = (known after apply) 2025-09-19 06:19:54.205700 | orchestrator | 06:19:54.205 STDOUT terraform:  + stop_before_destroy = 
false 2025-09-19 06:19:54.205784 | orchestrator | 06:19:54.205 STDOUT terraform:  + updated = (known after apply) 2025-09-19 06:19:54.205999 | orchestrator | 06:19:54.205 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-19 06:19:54.206177 | orchestrator | 06:19:54.206 STDOUT terraform:  + block_device { 2025-09-19 06:19:54.206200 | orchestrator | 06:19:54.206 STDOUT terraform:  + boot_index = 0 2025-09-19 06:19:54.206272 | orchestrator | 06:19:54.206 STDOUT terraform:  + delete_on_termination = false 2025-09-19 06:19:54.206335 | orchestrator | 06:19:54.206 STDOUT terraform:  + destination_type = "volume" 2025-09-19 06:19:54.206428 | orchestrator | 06:19:54.206 STDOUT terraform:  + multiattach = false 2025-09-19 06:19:54.206534 | orchestrator | 06:19:54.206 STDOUT terraform:  + source_type = "volume" 2025-09-19 06:19:54.206635 | orchestrator | 06:19:54.206 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 06:19:54.206815 | orchestrator | 06:19:54.206 STDOUT terraform:  } 2025-09-19 06:19:54.206836 | orchestrator | 06:19:54.206 STDOUT terraform:  + network { 2025-09-19 06:19:54.206964 | orchestrator | 06:19:54.206 STDOUT terraform:  + access_network = false 2025-09-19 06:19:54.206984 | orchestrator | 06:19:54.206 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-19 06:19:54.207082 | orchestrator | 06:19:54.206 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-19 06:19:54.207110 | orchestrator | 06:19:54.206 STDOUT terraform:  + mac = (known after apply) 2025-09-19 06:19:54.207198 | orchestrator | 06:19:54.206 STDOUT terraform:  + name = (known after apply) 2025-09-19 06:19:54.207252 | orchestrator | 06:19:54.207 STDOUT terraform:  + port = (known after apply) 2025-09-19 06:19:54.207298 | orchestrator | 06:19:54.207 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 06:19:54.207351 | orchestrator | 06:19:54.207 STDOUT terraform:  } 2025-09-19 06:19:54.207392 | orchestrator | 06:19:54.207 
STDOUT terraform:  } 2025-09-19 06:19:54.207468 | orchestrator | 06:19:54.207 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-09-19 06:19:54.207522 | orchestrator | 06:19:54.207 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-19 06:19:54.207659 | orchestrator | 06:19:54.207 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-19 06:19:54.207761 | orchestrator | 06:19:54.207 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-19 06:19:54.207987 | orchestrator | 06:19:54.207 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-19 06:19:54.208100 | orchestrator | 06:19:54.208 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 06:19:54.208193 | orchestrator | 06:19:54.208 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 06:19:54.208252 | orchestrator | 06:19:54.208 STDOUT terraform:  + config_drive = true 2025-09-19 06:19:54.208389 | orchestrator | 06:19:54.208 STDOUT terraform:  + created = (known after apply) 2025-09-19 06:19:54.208522 | orchestrator | 06:19:54.208 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-19 06:19:54.208587 | orchestrator | 06:19:54.208 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-19 06:19:54.208608 | orchestrator | 06:19:54.208 STDOUT terraform:  + force_delete = false 2025-09-19 06:19:54.208780 | orchestrator | 06:19:54.208 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-19 06:19:54.208803 | orchestrator | 06:19:54.208 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.208936 | orchestrator | 06:19:54.208 STDOUT terraform:  + image_id = (known after apply) 2025-09-19 06:19:54.209027 | orchestrator | 06:19:54.208 STDOUT terraform:  + image_name = (known after apply) 2025-09-19 06:19:54.209105 | orchestrator | 06:19:54.208 STDOUT terraform:  + key_pair = "testbed" 2025-09-19 06:19:54.209181 | orchestrator | 06:19:54.209 STDOUT terraform:  + name = 
"testbed-node-5" 2025-09-19 06:19:54.209210 | orchestrator | 06:19:54.209 STDOUT terraform:  + power_state = "active" 2025-09-19 06:19:54.209290 | orchestrator | 06:19:54.209 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:54.209386 | orchestrator | 06:19:54.209 STDOUT terraform:  + security_groups = (known after apply) 2025-09-19 06:19:54.209502 | orchestrator | 06:19:54.209 STDOUT terraform:  + stop_before_destroy = false 2025-09-19 06:19:54.209624 | orchestrator | 06:19:54.209 STDOUT terraform:  + updated = (known after apply) 2025-09-19 06:19:54.209732 | orchestrator | 06:19:54.209 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-19 06:19:54.209753 | orchestrator | 06:19:54.209 STDOUT terraform:  + block_device { 2025-09-19 06:19:54.209883 | orchestrator | 06:19:54.209 STDOUT terraform:  + boot_index = 0 2025-09-19 06:19:54.209977 | orchestrator | 06:19:54.209 STDOUT terraform:  + delete_on_termination = false 2025-09-19 06:19:54.210236 | orchestrator | 06:19:54.209 STDOUT terraform:  + destination_type = "volume" 2025-09-19 06:19:54.210273 | orchestrator | 06:19:54.210 STDOUT terraform:  + multiattach = false 2025-09-19 06:19:54.210326 | orchestrator | 06:19:54.210 STDOUT terraform:  + source_type = "volume" 2025-09-19 06:19:54.210595 | orchestrator | 06:19:54.210 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 06:19:54.210614 | orchestrator | 06:19:54.210 STDOUT terraform:  } 2025-09-19 06:19:54.210697 | orchestrator | 06:19:54.210 STDOUT terraform:  + network { 2025-09-19 06:19:54.210704 | orchestrator | 06:19:54.210 STDOUT terraform:  + access_network = false 2025-09-19 06:19:54.210747 | orchestrator | 06:19:54.210 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-19 06:19:54.210826 | orchestrator | 06:19:54.210 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-19 06:19:54.210903 | orchestrator | 06:19:54.210 STDOUT terraform:  + mac = (known after apply) 2025-09-19 
06:19:54.210910 | orchestrator | 06:19:54.210 STDOUT terraform:  + name = (known after apply) 2025-09-19 06:19:54.210962 | orchestrator | 06:19:54.210 STDOUT terraform:  + port = (known after apply) 2025-09-19 06:19:54.211042 | orchestrator | 06:19:54.210 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 06:19:54.211076 | orchestrator | 06:19:54.211 STDOUT terraform:  } 2025-09-19 06:19:54.211216 | orchestrator | 06:19:54.211 STDOUT terraform:  } 2025-09-19 06:19:54.211224 | orchestrator | 06:19:54.211 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-09-19 06:19:54.211243 | orchestrator | 06:19:54.211 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-09-19 06:19:54.211359 | orchestrator | 06:19:54.211 STDOUT terraform:  + fingerprint = (known after apply) 2025-09-19 06:19:54.211466 | orchestrator | 06:19:54.211 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.211473 | orchestrator | 06:19:54.211 STDOUT terraform:  + name = "testbed" 2025-09-19 06:19:54.211500 | orchestrator | 06:19:54.211 STDOUT terraform:  + private_key = (sensitive value) 2025-09-19 06:19:54.211630 | orchestrator | 06:19:54.211 STDOUT terraform:  + public_key = (known after apply) 2025-09-19 06:19:54.211698 | orchestrator | 06:19:54.211 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:54.211794 | orchestrator | 06:19:54.211 STDOUT terraform:  + user_id = (known after apply) 2025-09-19 06:19:54.211864 | orchestrator | 06:19:54.211 STDOUT terraform:  } 2025-09-19 06:19:54.212072 | orchestrator | 06:19:54.211 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-09-19 06:19:54.212295 | orchestrator | 06:19:54.212 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-19 06:19:54.212371 | orchestrator | 06:19:54.212 STDOUT terraform:  + device = (known after apply) 2025-09-19 06:19:54.212573 | orchestrator | 
06:19:54.212 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.212633 | orchestrator | 06:19:54.212 STDOUT terraform:  + instance_id = (known after apply) 2025-09-19 06:19:54.212650 | orchestrator | 06:19:54.212 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:54.212674 | orchestrator | 06:19:54.212 STDOUT terraform:  + volume_id = (known after apply) 2025-09-19 06:19:54.212682 | orchestrator | 06:19:54.212 STDOUT terraform:  } 2025-09-19 06:19:54.212915 | orchestrator | 06:19:54.212 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-09-19 06:19:54.213049 | orchestrator | 06:19:54.212 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-19 06:19:54.213109 | orchestrator | 06:19:54.213 STDOUT terraform:  + device = (known after apply) 2025-09-19 06:19:54.213159 | orchestrator | 06:19:54.213 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.213246 | orchestrator | 06:19:54.213 STDOUT terraform:  + instance_id = (known after apply) 2025-09-19 06:19:54.213298 | orchestrator | 06:19:54.213 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:54.213434 | orchestrator | 06:19:54.213 STDOUT terraform:  + volume_id = (known after apply) 2025-09-19 06:19:54.213451 | orchestrator | 06:19:54.213 STDOUT terraform:  } 2025-09-19 06:19:54.213498 | orchestrator | 06:19:54.213 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-09-19 06:19:54.213633 | orchestrator | 06:19:54.213 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-19 06:19:54.213762 | orchestrator | 06:19:54.213 STDOUT terraform:  + device = (known after apply) 2025-09-19 06:19:54.213790 | orchestrator | 06:19:54.213 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.213923 | orchestrator | 06:19:54.213 STDOUT terraform:  + instance_id = 
(known after apply) 2025-09-19 06:19:54.213962 | orchestrator | 06:19:54.213 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:54.214109 | orchestrator | 06:19:54.213 STDOUT terraform:  + volume_id = (known after apply) 2025-09-19 06:19:54.214187 | orchestrator | 06:19:54.214 STDOUT terraform:  } 2025-09-19 06:19:54.214291 | orchestrator | 06:19:54.214 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-09-19 06:19:54.214463 | orchestrator | 06:19:54.214 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-19 06:19:54.214585 | orchestrator | 06:19:54.214 STDOUT terraform:  + device = (known after apply) 2025-09-19 06:19:54.214734 | orchestrator | 06:19:54.214 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.214795 | orchestrator | 06:19:54.214 STDOUT terraform:  + instance_id = (known after apply) 2025-09-19 06:19:54.214813 | orchestrator | 06:19:54.214 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:54.214821 | orchestrator | 06:19:54.214 STDOUT terraform:  + volume_id = (known after apply) 2025-09-19 06:19:54.214838 | orchestrator | 06:19:54.214 STDOUT terraform:  } 2025-09-19 06:19:54.215069 | orchestrator | 06:19:54.214 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-09-19 06:19:54.215183 | orchestrator | 06:19:54.214 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-19 06:19:54.215227 | orchestrator | 06:19:54.215 STDOUT terraform:  + device = (known after apply) 2025-09-19 06:19:54.215277 | orchestrator | 06:19:54.215 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.215400 | orchestrator | 06:19:54.215 STDOUT terraform:  + instance_id = (known after apply) 2025-09-19 06:19:54.215536 | orchestrator | 06:19:54.215 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:54.215763 
| orchestrator | 06:19:54.215 STDOUT terraform:  + volume_id = (known after apply) 2025-09-19 06:19:54.215796 | orchestrator | 06:19:54.215 STDOUT terraform:  } 2025-09-19 06:19:54.215927 | orchestrator | 06:19:54.215 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-09-19 06:19:54.216093 | orchestrator | 06:19:54.215 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-19 06:19:54.216198 | orchestrator | 06:19:54.216 STDOUT terraform:  + device = (known after apply) 2025-09-19 06:19:54.216289 | orchestrator | 06:19:54.216 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.216482 | orchestrator | 06:19:54.216 STDOUT terraform:  + instance_id = (known after apply) 2025-09-19 06:19:54.216539 | orchestrator | 06:19:54.216 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:54.216611 | orchestrator | 06:19:54.216 STDOUT terraform:  + volume_id = (known after apply) 2025-09-19 06:19:54.216659 | orchestrator | 06:19:54.216 STDOUT terraform:  } 2025-09-19 06:19:54.216768 | orchestrator | 06:19:54.216 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-09-19 06:19:54.216827 | orchestrator | 06:19:54.216 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-19 06:19:54.216865 | orchestrator | 06:19:54.216 STDOUT terraform:  + device = (known after apply) 2025-09-19 06:19:54.216893 | orchestrator | 06:19:54.216 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.216923 | orchestrator | 06:19:54.216 STDOUT terraform:  + instance_id = (known after apply) 2025-09-19 06:19:54.216953 | orchestrator | 06:19:54.216 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:54.216982 | orchestrator | 06:19:54.216 STDOUT terraform:  + volume_id = (known after apply) 2025-09-19 06:19:54.216990 | orchestrator | 06:19:54.216 STDOUT 
terraform:  } 2025-09-19 06:19:54.217039 | orchestrator | 06:19:54.216 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-09-19 06:19:54.217085 | orchestrator | 06:19:54.217 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-19 06:19:54.217114 | orchestrator | 06:19:54.217 STDOUT terraform:  + device = (known after apply) 2025-09-19 06:19:54.217143 | orchestrator | 06:19:54.217 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.217170 | orchestrator | 06:19:54.217 STDOUT terraform:  + instance_id = (known after apply) 2025-09-19 06:19:54.217199 | orchestrator | 06:19:54.217 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:54.217233 | orchestrator | 06:19:54.217 STDOUT terraform:  + volume_id = (known after apply) 2025-09-19 06:19:54.217239 | orchestrator | 06:19:54.217 STDOUT terraform:  } 2025-09-19 06:19:54.217283 | orchestrator | 06:19:54.217 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-09-19 06:19:54.217330 | orchestrator | 06:19:54.217 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-19 06:19:54.217364 | orchestrator | 06:19:54.217 STDOUT terraform:  + device = (known after apply) 2025-09-19 06:19:54.217385 | orchestrator | 06:19:54.217 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.217410 | orchestrator | 06:19:54.217 STDOUT terraform:  + instance_id = (known after apply) 2025-09-19 06:19:54.217439 | orchestrator | 06:19:54.217 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:54.217465 | orchestrator | 06:19:54.217 STDOUT terraform:  + volume_id = (known after apply) 2025-09-19 06:19:54.217473 | orchestrator | 06:19:54.217 STDOUT terraform:  } 2025-09-19 06:19:54.217665 | orchestrator | 06:19:54.217 STDOUT terraform:  # 
openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-09-19 06:19:54.217886 | orchestrator | 06:19:54.217 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-09-19 06:19:54.218112 | orchestrator | 06:19:54.217 STDOUT terraform:  + fixed_ip = (known after apply) 2025-09-19 06:19:54.218178 | orchestrator | 06:19:54.218 STDOUT terraform:  + floating_ip = (known after apply) 2025-09-19 06:19:54.218273 | orchestrator | 06:19:54.218 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.218437 | orchestrator | 06:19:54.218 STDOUT terraform:  + port_id = (known after apply) 2025-09-19 06:19:54.218520 | orchestrator | 06:19:54.218 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:54.218631 | orchestrator | 06:19:54.218 STDOUT terraform:  } 2025-09-19 06:19:54.218720 | orchestrator | 06:19:54.218 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-09-19 06:19:54.218918 | orchestrator | 06:19:54.218 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-09-19 06:19:54.218979 | orchestrator | 06:19:54.218 STDOUT terraform:  + address = (known after apply) 2025-09-19 06:19:54.219115 | orchestrator | 06:19:54.218 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 06:19:54.219161 | orchestrator | 06:19:54.219 STDOUT terraform:  + dns_domain = (known after apply) 2025-09-19 06:19:54.219263 | orchestrator | 06:19:54.219 STDOUT terraform:  + dns_name = (known after apply) 2025-09-19 06:19:54.219994 | orchestrator | 06:19:54.219 STDOUT terraform:  + fixed_ip = (known after apply) 2025-09-19 06:19:54.220162 | orchestrator | 06:19:54.219 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.220314 | orchestrator | 06:19:54.220 STDOUT terraform:  + pool = "public" 2025-09-19 06:19:54.220435 | orchestrator | 06:19:54.220 STDOUT terraform:  + 
port_id = (known after apply) 2025-09-19 06:19:54.220503 | orchestrator | 06:19:54.220 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:54.220631 | orchestrator | 06:19:54.220 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-19 06:19:54.220777 | orchestrator | 06:19:54.220 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 06:19:54.220792 | orchestrator | 06:19:54.220 STDOUT terraform:  } 2025-09-19 06:19:54.220998 | orchestrator | 06:19:54.220 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-09-19 06:19:54.221089 | orchestrator | 06:19:54.220 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-09-19 06:19:54.221218 | orchestrator | 06:19:54.221 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-19 06:19:54.221393 | orchestrator | 06:19:54.221 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 06:19:54.221441 | orchestrator | 06:19:54.221 STDOUT terraform:  + availability_zone_hints = [ 2025-09-19 06:19:54.221447 | orchestrator | 06:19:54.221 STDOUT terraform:  + "nova", 2025-09-19 06:19:54.221490 | orchestrator | 06:19:54.221 STDOUT terraform:  ] 2025-09-19 06:19:54.221507 | orchestrator | 06:19:54.221 STDOUT terraform:  + dns_domain = (known after apply) 2025-09-19 06:19:54.221645 | orchestrator | 06:19:54.221 STDOUT terraform:  + external = (known after apply) 2025-09-19 06:19:54.221784 | orchestrator | 06:19:54.221 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.221831 | orchestrator | 06:19:54.221 STDOUT terraform:  + mtu = (known after apply) 2025-09-19 06:19:54.221968 | orchestrator | 06:19:54.221 STDOUT terraform:  + name = "net-testbed-management" 2025-09-19 06:19:54.222032 | orchestrator | 06:19:54.221 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-19 06:19:54.222039 | orchestrator | 06:19:54.221 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-19 
06:19:54.222090 | orchestrator | 06:19:54.222 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:54.222265 | orchestrator | 06:19:54.222 STDOUT terraform:  + shared = (known after apply) 2025-09-19 06:19:54.222321 | orchestrator | 06:19:54.222 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 06:19:54.222367 | orchestrator | 06:19:54.222 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-09-19 06:19:54.222477 | orchestrator | 06:19:54.222 STDOUT terraform:  + segments (known after apply) 2025-09-19 06:19:54.222502 | orchestrator | 06:19:54.222 STDOUT terraform:  } 2025-09-19 06:19:54.222628 | orchestrator | 06:19:54.222 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-09-19 06:19:54.222795 | orchestrator | 06:19:54.222 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-09-19 06:19:54.223099 | orchestrator | 06:19:54.222 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-19 06:19:54.223151 | orchestrator | 06:19:54.223 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-19 06:19:54.223316 | orchestrator | 06:19:54.223 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-19 06:19:54.223557 | orchestrator | 06:19:54.223 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 06:19:54.223626 | orchestrator | 06:19:54.223 STDOUT terraform:  + device_id = (known after apply) 2025-09-19 06:19:54.223703 | orchestrator | 06:19:54.223 STDOUT terraform:  + device_owner = (known after apply) 2025-09-19 06:19:54.223793 | orchestrator | 06:19:54.223 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-19 06:19:54.227292 | orchestrator | 06:19:54.223 STDOUT terraform:  + dns_name = (known after apply) 2025-09-19 06:19:54.227311 | orchestrator | 06:19:54.227 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.227348 | orchestrator | 06:19:54.227 STDOUT terraform:  + 
mac_address = (known after apply) 2025-09-19 06:19:54.227384 | orchestrator | 06:19:54.227 STDOUT terraform:  + network_id = (known after apply) 2025-09-19 06:19:54.227419 | orchestrator | 06:19:54.227 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-19 06:19:54.227482 | orchestrator | 06:19:54.227 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-19 06:19:54.235011 | orchestrator | 06:19:54.227 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:54.235116 | orchestrator | 06:19:54.234 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-19 06:19:54.235773 | orchestrator | 06:19:54.234 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 06:19:54.235897 | orchestrator | 06:19:54.235 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 06:19:54.235910 | orchestrator | 06:19:54.235 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-19 06:19:54.235916 | orchestrator | 06:19:54.235 STDOUT terraform:  } 2025-09-19 06:19:54.235926 | orchestrator | 06:19:54.235 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 06:19:54.235931 | orchestrator | 06:19:54.235 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-19 06:19:54.235959 | orchestrator | 06:19:54.235 STDOUT terraform:  } 2025-09-19 06:19:54.235974 | orchestrator | 06:19:54.235 STDOUT terraform:  + binding (known after apply) 2025-09-19 06:19:54.235987 | orchestrator | 06:19:54.235 STDOUT terraform:  + fixed_ip { 2025-09-19 06:19:54.235991 | orchestrator | 06:19:54.235 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-09-19 06:19:54.235995 | orchestrator | 06:19:54.235 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-19 06:19:54.236054 | orchestrator | 06:19:54.235 STDOUT terraform:  } 2025-09-19 06:19:54.236059 | orchestrator | 06:19:54.235 STDOUT terraform:  } 2025-09-19 06:19:54.236071 | orchestrator | 06:19:54.235 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will 
be created 2025-09-19 06:19:54.236078 | orchestrator | 06:19:54.236 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-19 06:19:54.236128 | orchestrator | 06:19:54.236 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-19 06:19:54.236193 | orchestrator | 06:19:54.236 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-19 06:19:54.236200 | orchestrator | 06:19:54.236 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-19 06:19:54.236268 | orchestrator | 06:19:54.236 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 06:19:54.236321 | orchestrator | 06:19:54.236 STDOUT terraform:  + device_id = (known after apply) 2025-09-19 06:19:54.236375 | orchestrator | 06:19:54.236 STDOUT terraform:  + device_owner = (known after apply) 2025-09-19 06:19:54.236388 | orchestrator | 06:19:54.236 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-19 06:19:54.236394 | orchestrator | 06:19:54.236 STDOUT terraform:  + dns_name = (known after apply) 2025-09-19 06:19:54.236398 | orchestrator | 06:19:54.236 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.236425 | orchestrator | 06:19:54.236 STDOUT terraform:  + mac_address = (known after apply) 2025-09-19 06:19:54.236438 | orchestrator | 06:19:54.236 STDOUT terraform:  + network_id = (known after apply) 2025-09-19 06:19:54.236471 | orchestrator | 06:19:54.236 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-19 06:19:54.236513 | orchestrator | 06:19:54.236 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-19 06:19:54.236547 | orchestrator | 06:19:54.236 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:54.236637 | orchestrator | 06:19:54.236 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-19 06:19:54.236644 | orchestrator | 06:19:54.236 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 06:19:54.236648 | 
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)
      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after
apply) 2025-09-19 06:19:54.257467 | orchestrator | 06:19:54.257 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 06:19:54.257473 | orchestrator | 06:19:54.257 STDOUT terraform:  } 2025-09-19 06:19:54.257522 | orchestrator | 06:19:54.257 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-09-19 06:19:54.257567 | orchestrator | 06:19:54.257 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-09-19 06:19:54.257598 | orchestrator | 06:19:54.257 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 06:19:54.257626 | orchestrator | 06:19:54.257 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-09-19 06:19:54.257650 | orchestrator | 06:19:54.257 STDOUT terraform:  + dns_nameservers = [ 2025-09-19 06:19:54.257667 | orchestrator | 06:19:54.257 STDOUT terraform:  + "8.8.8.8", 2025-09-19 06:19:54.257682 | orchestrator | 06:19:54.257 STDOUT terraform:  + "9.9.9.9", 2025-09-19 06:19:54.257688 | orchestrator | 06:19:54.257 STDOUT terraform:  ] 2025-09-19 06:19:54.257712 | orchestrator | 06:19:54.257 STDOUT terraform:  + enable_dhcp = true 2025-09-19 06:19:54.257742 | orchestrator | 06:19:54.257 STDOUT terraform:  + gateway_ip = (known after apply) 2025-09-19 06:19:54.257773 | orchestrator | 06:19:54.257 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.257792 | orchestrator | 06:19:54.257 STDOUT terraform:  + ip_version = 4 2025-09-19 06:19:54.257822 | orchestrator | 06:19:54.257 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-09-19 06:19:54.257867 | orchestrator | 06:19:54.257 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-09-19 06:19:54.257903 | orchestrator | 06:19:54.257 STDOUT terraform:  + name = "subnet-testbed-management" 2025-09-19 06:19:54.257932 | orchestrator | 06:19:54.257 STDOUT terraform:  + network_id = (known after apply) 2025-09-19 06:19:54.257952 | orchestrator | 06:19:54.257 STDOUT terraform:  + no_gateway = 
false 2025-09-19 06:19:54.258011 | orchestrator | 06:19:54.257 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:54.258032 | orchestrator | 06:19:54.257 STDOUT terraform:  + service_types = (known after apply) 2025-09-19 06:19:54.258059 | orchestrator | 06:19:54.258 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 06:19:54.258076 | orchestrator | 06:19:54.258 STDOUT terraform:  + allocation_pool { 2025-09-19 06:19:54.258101 | orchestrator | 06:19:54.258 STDOUT terraform:  + end = "192.168.31.250" 2025-09-19 06:19:54.258124 | orchestrator | 06:19:54.258 STDOUT terraform:  + start = "192.168.31.200" 2025-09-19 06:19:54.258130 | orchestrator | 06:19:54.258 STDOUT terraform:  } 2025-09-19 06:19:54.258147 | orchestrator | 06:19:54.258 STDOUT terraform:  } 2025-09-19 06:19:54.258170 | orchestrator | 06:19:54.258 STDOUT terraform:  # terraform_data.image will be created 2025-09-19 06:19:54.258195 | orchestrator | 06:19:54.258 STDOUT terraform:  + resource "terraform_data" "image" { 2025-09-19 06:19:54.258219 | orchestrator | 06:19:54.258 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.258236 | orchestrator | 06:19:54.258 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-19 06:19:54.258259 | orchestrator | 06:19:54.258 STDOUT terraform:  + output = (known after apply) 2025-09-19 06:19:54.258265 | orchestrator | 06:19:54.258 STDOUT terraform:  } 2025-09-19 06:19:54.258301 | orchestrator | 06:19:54.258 STDOUT terraform:  # terraform_data.image_node will be created 2025-09-19 06:19:54.258326 | orchestrator | 06:19:54.258 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-09-19 06:19:54.258351 | orchestrator | 06:19:54.258 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:54.258373 | orchestrator | 06:19:54.258 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-19 06:19:54.258396 | orchestrator | 06:19:54.258 STDOUT terraform:  + output = (known after apply) 2025-09-19 06:19:54.258402 | 
orchestrator | 06:19:54.258 STDOUT terraform:  } 2025-09-19 06:19:54.258435 | orchestrator | 06:19:54.258 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-09-19 06:19:54.258448 | orchestrator | 06:19:54.258 STDOUT terraform: Changes to Outputs: 2025-09-19 06:19:54.258465 | orchestrator | 06:19:54.258 STDOUT terraform:  + manager_address = (sensitive value) 2025-09-19 06:19:54.258488 | orchestrator | 06:19:54.258 STDOUT terraform:  + private_key = (sensitive value) 2025-09-19 06:19:54.421915 | orchestrator | 06:19:54.421 STDOUT terraform: terraform_data.image_node: Creating... 2025-09-19 06:19:54.421973 | orchestrator | 06:19:54.421 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=714933b3-3d9c-8f9b-426b-968ddf76d018] 2025-09-19 06:19:54.421980 | orchestrator | 06:19:54.421 STDOUT terraform: terraform_data.image: Creating... 2025-09-19 06:19:54.423003 | orchestrator | 06:19:54.422 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=141635ac-8a52-ab08-6a7e-bcce8234b4f2] 2025-09-19 06:19:54.441976 | orchestrator | 06:19:54.441 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-09-19 06:19:54.442052 | orchestrator | 06:19:54.441 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-09-19 06:19:54.452004 | orchestrator | 06:19:54.451 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-09-19 06:19:54.458815 | orchestrator | 06:19:54.458 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-09-19 06:19:54.459206 | orchestrator | 06:19:54.459 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-09-19 06:19:54.459514 | orchestrator | 06:19:54.459 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-09-19 06:19:54.459601 | orchestrator | 06:19:54.459 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 
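For orientation, the security-group-rule and subnet entries in the plan above correspond to Terraform definitions roughly like the following. This is a sketch reconstructed from the logged plan, not the actual testbed source: attribute values are taken from the log, while the resource references (`security_group_id`, `network_id`) are assumptions about how the resources are wired together.

```hcl
# Sketch reconstructed from the plan output above; references are assumed.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # VRRP, given as an IP protocol number
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```

Note that DHCP addresses are confined to the small `192.168.31.200`-`250` pool inside the much larger `/20`, leaving the rest of the range free for statically assigned ports.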
2025-09-19 06:19:54.459634 | orchestrator | 06:19:54.459 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-09-19 06:19:54.459655 | orchestrator | 06:19:54.459 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-09-19 06:19:54.459660 | orchestrator | 06:19:54.459 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-09-19 06:19:54.893068 | orchestrator | 06:19:54.892 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-19 06:19:54.898968 | orchestrator | 06:19:54.898 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-19 06:19:54.906725 | orchestrator | 06:19:54.905 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-09-19 06:19:54.906809 | orchestrator | 06:19:54.906 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-09-19 06:19:54.945424 | orchestrator | 06:19:54.945 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2025-09-19 06:19:54.959225 | orchestrator | 06:19:54.959 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-09-19 06:19:55.393911 | orchestrator | 06:19:55.393 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=4933482e-f42a-4fca-90ba-a5bb6c832666]
2025-09-19 06:19:55.406061 | orchestrator | 06:19:55.404 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-09-19 06:19:58.083719 | orchestrator | 06:19:58.083 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=1117915d-c4ec-4d47-9877-c3f2a311bdd8]
2025-09-19 06:19:58.091900 | orchestrator | 06:19:58.091 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=68d7532d-29ea-4f3d-b7b6-675f70301c39]
2025-09-19 06:19:58.092697 | orchestrator | 06:19:58.092 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-09-19 06:19:58.101292 | orchestrator | 06:19:58.101 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-09-19 06:19:58.108047 | orchestrator | 06:19:58.107 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=c8e79e65-71f7-4ae8-8fa4-6c07ef757528]
2025-09-19 06:19:58.116509 | orchestrator | 06:19:58.116 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-09-19 06:19:58.124880 | orchestrator | 06:19:58.124 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=af8571bd-f20f-46c1-9b84-53d29d179301]
2025-09-19 06:19:58.128275 | orchestrator | 06:19:58.128 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=0ec87ec4-de78-4354-a913-8c3da733e508]
2025-09-19 06:19:58.132389 | orchestrator | 06:19:58.132 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-09-19 06:19:58.137557 | orchestrator | 06:19:58.135 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-09-19 06:19:58.138746 | orchestrator | 06:19:58.138 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=a2591162-fd7d-4f7c-a24f-a875e0bfaf5c]
2025-09-19 06:19:58.142199 | orchestrator | 06:19:58.142 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-09-19 06:19:58.188141 | orchestrator | 06:19:58.187 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=1f9d1cec-7d6c-4c71-8749-cd7e53c954b2]
2025-09-19 06:19:58.193581 | orchestrator | 06:19:58.193 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=f326ea53-fd8a-4d1e-8637-ed74e9f7229b]
2025-09-19 06:19:58.203917 | orchestrator | 06:19:58.203 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-09-19 06:19:58.204818 | orchestrator | 06:19:58.204 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-09-19 06:19:58.216409 | orchestrator | 06:19:58.216 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=5669ec6de35bae93a7be3bc02a0c30750c8397ab]
2025-09-19 06:19:58.216737 | orchestrator | 06:19:58.216 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=1ca04fa03c27a9d27ebc62eb0e7f4181954b70b2]
2025-09-19 06:19:58.228219 | orchestrator | 06:19:58.228 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=9b35f7c3-f4ee-4f20-a638-8acbecbf2b97]
2025-09-19 06:19:58.230269 | orchestrator | 06:19:58.230 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-09-19 06:19:58.753035 | orchestrator | 06:19:58.752 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=726e57ca-e096-4490-b127-344864fa14b3]
2025-09-19 06:19:59.111389 | orchestrator | 06:19:59.111 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=680d838e-bca4-4120-8c23-407b2d5f1415]
2025-09-19 06:19:59.119260 | orchestrator | 06:19:59.119 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-09-19 06:20:01.495646 | orchestrator | 06:20:01.495 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=24b63ec3-2727-4f55-a7d9-4b9cf8404670]
2025-09-19 06:20:01.540891 | orchestrator | 06:20:01.540 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a]
2025-09-19 06:20:01.557455 | orchestrator | 06:20:01.557 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=fa7bcb17-5b80-45db-868e-e545200cc85f]
2025-09-19 06:20:01.580079 | orchestrator | 06:20:01.579 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=89cbc581-c97b-43be-9e42-34404cedab71]
2025-09-19 06:20:01.591903 | orchestrator | 06:20:01.591 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=09d1dc7c-0142-46b7-bfb8-c4846e18939d]
2025-09-19 06:20:01.596413 | orchestrator | 06:20:01.596 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=19b572c9-891c-4fc6-a34f-184d2479a4fd]
2025-09-19 06:20:01.605549 | orchestrator | 06:20:01.605 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=a0d393fa-94d5-49f8-a4db-2b6e7ebce4a7]
2025-09-19 06:20:01.611502 | orchestrator | 06:20:01.611 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-09-19 06:20:01.611573 | orchestrator | 06:20:01.611 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-09-19 06:20:01.614480 | orchestrator | 06:20:01.614 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-09-19 06:20:02.332535 | orchestrator | 06:20:02.332 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=8957c906-c53e-4c90-a6f5-8ee47d709324]
2025-09-19 06:20:02.349689 | orchestrator | 06:20:02.349 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-09-19 06:20:02.350968 | orchestrator | 06:20:02.350 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-09-19 06:20:02.353310 | orchestrator | 06:20:02.353 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-09-19 06:20:02.355548 | orchestrator | 06:20:02.355 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-09-19 06:20:02.355832 | orchestrator | 06:20:02.355 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-09-19 06:20:02.360798 | orchestrator | 06:20:02.360 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-09-19 06:20:02.364872 | orchestrator | 06:20:02.364 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-09-19 06:20:02.365677 | orchestrator | 06:20:02.365 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-09-19 06:20:02.410931 | orchestrator | 06:20:02.408 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=8bc74bc8-198f-432a-941d-447f31bc6809]
2025-09-19 06:20:02.416485 | orchestrator | 06:20:02.416 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-09-19 06:20:02.734793 | orchestrator | 06:20:02.734 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=067b8e83-01a6-485d-abf7-88d9e1867b84]
2025-09-19 06:20:02.748363 | orchestrator | 06:20:02.748 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-09-19 06:20:03.026249 | orchestrator | 06:20:03.025 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=f5a7e1a7-a77f-4ef9-b0d9-8348e8942dff]
2025-09-19 06:20:03.033721 | orchestrator | 06:20:03.033 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-09-19 06:20:03.037552 | orchestrator | 06:20:03.037 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=f9c4231f-6fa8-4fa5-bf3f-f53dfc46e195]
2025-09-19 06:20:03.048163 | orchestrator | 06:20:03.047 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-09-19 06:20:03.051163 | orchestrator | 06:20:03.051 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=00fa1cb8-449f-4197-8bed-d2a8f258440a]
2025-09-19 06:20:03.056725 | orchestrator | 06:20:03.056 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-09-19 06:20:03.289722 | orchestrator | 06:20:03.289 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=9b30559c-2de8-4c56-971d-5ee3dae50d36]
2025-09-19 06:20:03.296157 | orchestrator | 06:20:03.295 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-09-19 06:20:03.466323 | orchestrator | 06:20:03.465 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=dbee379b-8c21-462b-8a84-cf54d86409b2]
2025-09-19 06:20:03.474515 | orchestrator | 06:20:03.474 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=b8cdcc13-c04c-49a7-a6e0-ebb04a13a240]
2025-09-19 06:20:03.478979 | orchestrator | 06:20:03.478 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-09-19 06:20:03.480331 | orchestrator | 06:20:03.480 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-09-19 06:20:03.519302 | orchestrator | 06:20:03.518 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=65ac5ee0-d35d-49d7-900c-4033da22ffec]
2025-09-19 06:20:03.610341 | orchestrator | 06:20:03.609 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 2s [id=ad5d73cb-b5a6-450f-8683-adc422ff35b6]
2025-09-19 06:20:03.736822 | orchestrator | 06:20:03.736 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=06153d74-f160-423a-9f02-dad5a087a5f3]
2025-09-19 06:20:03.786591 | orchestrator | 06:20:03.786 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 2s [id=c9f29102-eec2-4d59-96b9-ec78f2aa81f4]
2025-09-19 06:20:03.821032 | orchestrator | 06:20:03.820 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 2s [id=9e434be4-cb9d-4422-af14-b5cb2fea5131]
2025-09-19 06:20:03.903676 | orchestrator | 06:20:03.903 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=80c80fc0-d98e-42ca-8819-26d5b178ffaf]
2025-09-19 06:20:03.958743 | orchestrator | 06:20:03.958 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=f3872cc6-53df-4d76-8ec8-7b9a9921c797]
2025-09-19 06:20:04.078084 | orchestrator | 06:20:04.077 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=5efaf364-4cf2-40e3-b406-4cbe2d7b5486]
2025-09-19 06:20:04.493581 | orchestrator | 06:20:04.493 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=570fcc7b-19a1-431e-b080-128e1c1b9d4d]
2025-09-19 06:20:04.638324 | orchestrator | 06:20:04.637 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=5745e6ef-8f32-424c-9e6c-4f1e38e4c600]
2025-09-19 06:20:04.657603 | orchestrator | 06:20:04.657 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-09-19 06:20:04.698262 | orchestrator | 06:20:04.698 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-09-19 06:20:04.698325 | orchestrator | 06:20:04.698 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-09-19 06:20:04.710527 | orchestrator | 06:20:04.709 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-09-19 06:20:04.710581 | orchestrator | 06:20:04.710 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-09-19 06:20:04.721657 | orchestrator | 06:20:04.712 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-09-19 06:20:04.723999 | orchestrator | 06:20:04.723 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-09-19 06:20:06.345304 | orchestrator | 06:20:06.344 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=048809e7-6cf7-4302-b4a2-4af127c2bb8a]
2025-09-19 06:20:06.353301 | orchestrator | 06:20:06.353 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-09-19 06:20:06.364344 | orchestrator | 06:20:06.364 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-09-19 06:20:06.364644 | orchestrator | 06:20:06.364 STDOUT terraform: local_file.inventory: Creating...
2025-09-19 06:20:06.370156 | orchestrator | 06:20:06.369 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=6ae1873cd2b4402ce3a15bf7c4d6f3c626012eb5]
2025-09-19 06:20:06.371312 | orchestrator | 06:20:06.371 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=fb811571449d967db2f53c254ff13f10bc45a754]
2025-09-19 06:20:07.080106 | orchestrator | 06:20:07.079 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=048809e7-6cf7-4302-b4a2-4af127c2bb8a]
2025-09-19 06:20:14.701745 | orchestrator | 06:20:14.701 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-09-19 06:20:14.707298 | orchestrator | 06:20:14.707 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-09-19 06:20:14.711604 | orchestrator | 06:20:14.711 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-09-19 06:20:14.712681 | orchestrator | 06:20:14.712 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-09-19 06:20:14.712740 | orchestrator | 06:20:14.712 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-09-19 06:20:14.730122 | orchestrator | 06:20:14.729 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-09-19 06:20:24.703794 | orchestrator | 06:20:24.703 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-09-19 06:20:24.707904 | orchestrator | 06:20:24.707 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-09-19 06:20:24.712209 | orchestrator | 06:20:24.711 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-09-19 06:20:24.713619 | orchestrator | 06:20:24.713 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-09-19 06:20:24.713746 | orchestrator | 06:20:24.713 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-09-19 06:20:24.730991 | orchestrator | 06:20:24.730 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-09-19 06:20:25.358762 | orchestrator | 06:20:25.358 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=ed251974-1b7e-4529-83cb-0b78461b3b18]
2025-09-19 06:20:34.704145 | orchestrator | 06:20:34.703 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-09-19 06:20:34.713443 | orchestrator | 06:20:34.713 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-09-19 06:20:34.714559 | orchestrator | 06:20:34.714 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-09-19 06:20:34.714646 | orchestrator | 06:20:34.714 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-09-19 06:20:34.732049 | orchestrator | 06:20:34.731 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-09-19 06:20:35.349784 | orchestrator | 06:20:35.349 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=bc012b53-03f7-413e-adfe-f9c93cfebfbf]
2025-09-19 06:20:35.619876 | orchestrator | 06:20:35.619 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=da754696-4cbc-485c-b801-ccf47d37d5db]
2025-09-19 06:20:44.704331 | orchestrator | 06:20:44.704 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2025-09-19 06:20:44.715385 | orchestrator | 06:20:44.715 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2025-09-19 06:20:44.715467 | orchestrator | 06:20:44.715 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2025-09-19 06:20:45.571009 | orchestrator | 06:20:45.570 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 41s [id=114657fb-ae11-4dad-a0c5-74e513c120e5]
2025-09-19 06:20:46.033954 | orchestrator | 06:20:46.033 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=81a8f3bf-a4fc-4879-a8d2-a4df2794da65]
2025-09-19 06:20:46.062511 | orchestrator | 06:20:46.062 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=d2043749-4647-4b54-9e5e-6b2eb3d730b0]
2025-09-19 06:20:46.078180 | orchestrator | 06:20:46.078 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-09-19 06:20:46.080283 | orchestrator | 06:20:46.080 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=3631647845133056422]
2025-09-19 06:20:46.090974 | orchestrator | 06:20:46.090 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-09-19 06:20:46.091116 | orchestrator | 06:20:46.091 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-09-19 06:20:46.092695 | orchestrator | 06:20:46.092 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-09-19 06:20:46.092721 | orchestrator | 06:20:46.092 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-09-19 06:20:46.093284 | orchestrator | 06:20:46.093 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
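The attachment IDs that follow have the form `<server_id>/<volume_id>`, which makes the volume-to-server mapping visible: attachments 0/3/6 land on `node_server[3]`, 1/4/7 on `node_server[4]`, and 2/5/8 on `node_server[5]`. A resource definition producing that pattern could look roughly like this; the count and the index expression are inferred from the logged attachment IDs, not taken from the actual testbed source.

```hcl
# Hypothetical sketch: three volumes per node, attached only to the three
# resource nodes (node_server[3..5]); mapping inferred from the log above.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 3 + 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```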
2025-09-19 06:20:46.093680 | orchestrator | 06:20:46.093 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-09-19 06:20:46.095527 | orchestrator | 06:20:46.095 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-09-19 06:20:46.113666 | orchestrator | 06:20:46.113 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-09-19 06:20:46.119365 | orchestrator | 06:20:46.119 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-09-19 06:20:46.120275 | orchestrator | 06:20:46.120 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-09-19 06:20:49.487912 | orchestrator | 06:20:49.487 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=da754696-4cbc-485c-b801-ccf47d37d5db/c8e79e65-71f7-4ae8-8fa4-6c07ef757528]
2025-09-19 06:20:49.519369 | orchestrator | 06:20:49.518 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=d2043749-4647-4b54-9e5e-6b2eb3d730b0/f326ea53-fd8a-4d1e-8637-ed74e9f7229b]
2025-09-19 06:20:49.540202 | orchestrator | 06:20:49.539 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=ed251974-1b7e-4529-83cb-0b78461b3b18/af8571bd-f20f-46c1-9b84-53d29d179301]
2025-09-19 06:20:49.570315 | orchestrator | 06:20:49.569 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=da754696-4cbc-485c-b801-ccf47d37d5db/68d7532d-29ea-4f3d-b7b6-675f70301c39]
2025-09-19 06:20:49.572395 | orchestrator | 06:20:49.572 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=ed251974-1b7e-4529-83cb-0b78461b3b18/1117915d-c4ec-4d47-9877-c3f2a311bdd8]
2025-09-19 06:20:49.593906 | orchestrator | 06:20:49.593 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=d2043749-4647-4b54-9e5e-6b2eb3d730b0/0ec87ec4-de78-4354-a913-8c3da733e508]
2025-09-19 06:20:55.670616 | orchestrator | 06:20:55.669 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=da754696-4cbc-485c-b801-ccf47d37d5db/1f9d1cec-7d6c-4c71-8749-cd7e53c954b2]
2025-09-19 06:20:55.710007 | orchestrator | 06:20:55.709 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=d2043749-4647-4b54-9e5e-6b2eb3d730b0/9b35f7c3-f4ee-4f20-a638-8acbecbf2b97]
2025-09-19 06:20:55.907458 | orchestrator | 06:20:55.907 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=ed251974-1b7e-4529-83cb-0b78461b3b18/a2591162-fd7d-4f7c-a24f-a875e0bfaf5c]
2025-09-19 06:20:56.125264 | orchestrator | 06:20:56.125 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-09-19 06:21:06.125570 | orchestrator | 06:21:06.125 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-09-19 06:21:06.483285 | orchestrator | 06:21:06.482 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=a9481df6-cdf4-4797-b299-39e85974fe5e]
2025-09-19 06:21:06.509737 | orchestrator | 06:21:06.509 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-09-19 06:21:06.509874 | orchestrator | 06:21:06.509 STDOUT terraform: Outputs:
2025-09-19 06:21:06.509901 | orchestrator | 06:21:06.509 STDOUT terraform: manager_address = 
2025-09-19 06:21:06.509914 | orchestrator | 06:21:06.509 STDOUT terraform: private_key = 
2025-09-19 06:21:06.780112 | orchestrator | ok: Runtime: 0:01:18.396243
2025-09-19 06:21:06.816300 |
2025-09-19 06:21:06.816420 | TASK [Fetch manager address]
2025-09-19 06:21:07.244162 | orchestrator | ok
2025-09-19 06:21:07.251447 |
2025-09-19 06:21:07.251563 | TASK [Set manager_host address]
2025-09-19 06:21:07.313962 | orchestrator | ok
2025-09-19 06:21:07.320804 |
2025-09-19 06:21:07.320911 | LOOP [Update ansible collections]
2025-09-19 06:21:08.472544 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-09-19 06:21:08.473080 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-09-19 06:21:08.473147 | orchestrator | Starting galaxy collection install process
2025-09-19 06:21:08.473193 | orchestrator | Process install dependency map
2025-09-19 06:21:08.473257 | orchestrator | Starting collection install process
2025-09-19 06:21:08.473295 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons'
2025-09-19 06:21:08.473341 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons
2025-09-19 06:21:08.473386 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-09-19 06:21:08.473478 | orchestrator | ok: Item: commons Runtime: 0:00:00.837371
2025-09-19 06:21:09.294380 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-09-19 06:21:09.294502 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-09-19 06:21:09.294532 | orchestrator | Starting galaxy collection install process
2025-09-19 06:21:09.294555 | orchestrator | Process install dependency map
2025-09-19 06:21:09.294575 | orchestrator | Starting collection install process
2025-09-19 06:21:09.294595 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services'
2025-09-19 06:21:09.294615 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services
2025-09-19 06:21:09.294634 | orchestrator | osism.services:999.0.0 was installed successfully
2025-09-19 06:21:09.294700 | orchestrator | ok: Item: services Runtime: 0:00:00.581935
2025-09-19 06:21:09.323349 |
2025-09-19 06:21:09.323500 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-09-19 06:21:19.865304 | orchestrator | ok
2025-09-19 06:21:19.874060 |
2025-09-19 06:21:19.874168 | TASK [Wait a little longer for the manager so that everything is ready]
2025-09-19 06:22:19.914348 | orchestrator | ok
2025-09-19 06:22:19.924127 |
2025-09-19 06:22:19.924251 | TASK [Fetch manager ssh hostkey]
2025-09-19 06:22:21.494152 | orchestrator | Output suppressed because no_log was given
2025-09-19 06:22:21.510324 |
2025-09-19 06:22:21.510492 | TASK [Get ssh keypair from terraform environment]
2025-09-19 06:22:22.046419 | orchestrator | ok: Runtime: 0:00:00.008489
2025-09-19 06:22:22.061604 |
2025-09-19 06:22:22.061758 | TASK [Point out that the following task takes some time and does not give any output]
2025-09-19 06:22:22.107330 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-09-19 06:22:22.117335 |
2025-09-19 06:22:22.117454 | TASK [Run manager part 0]
2025-09-19 06:22:23.182930 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-09-19 06:22:23.227983 | orchestrator |
2025-09-19 06:22:23.228022 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2025-09-19 06:22:23.228029 | orchestrator |
2025-09-19 06:22:23.228041 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2025-09-19 06:22:24.812939 | orchestrator | ok: [testbed-manager]
2025-09-19 06:22:24.971141 | orchestrator |
2025-09-19 06:22:24.971260 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-09-19 06:22:24.971287 | orchestrator |
2025-09-19 06:22:24.971311 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-19 06:22:26.806689 | orchestrator | ok: [testbed-manager]
2025-09-19 06:22:26.806813 | orchestrator |
2025-09-19 06:22:26.806826 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-09-19 06:22:27.472794 | orchestrator | ok: [testbed-manager]
2025-09-19 06:22:27.472869 | orchestrator |
2025-09-19 06:22:27.472878 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-09-19 06:22:27.534489 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:22:27.534570 | orchestrator |
2025-09-19 06:22:27.534591 | orchestrator | TASK [Update package cache] ****************************************************
2025-09-19 06:22:27.569253 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:22:27.569306 | orchestrator |
2025-09-19 06:22:27.569315 | orchestrator | TASK [Install required packages] ***********************************************
2025-09-19 06:22:27.600230 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:22:27.600279 | orchestrator |
2025-09-19 06:22:27.600285 | orchestrator | TASK [Remove some python packages] *********************************************
2025-09-19 06:22:27.627447 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:22:27.627493 | orchestrator |
2025-09-19 06:22:27.627499 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-09-19 06:22:27.653040 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:22:27.653082 | orchestrator |
2025-09-19 06:22:27.653088 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ******************************
2025-09-19 06:22:27.679851 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:22:27.679892 | orchestrator |
2025-09-19 06:22:27.679900 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2025-09-19 06:22:27.707501 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:22:27.707545 | orchestrator |
2025-09-19 06:22:27.707553 | orchestrator | TASK [Set APT options on manager] **********************************************
2025-09-19 06:22:28.444480 | orchestrator | changed: [testbed-manager]
2025-09-19 06:22:28.444564 | orchestrator |
2025-09-19 06:22:28.444580 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2025-09-19 06:24:54.381351 | orchestrator | changed: [testbed-manager]
2025-09-19 06:24:54.381424 | orchestrator |
2025-09-19 06:24:54.381442 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-09-19 06:26:12.311240 | orchestrator | changed: [testbed-manager]
2025-09-19 06:26:12.311347 | orchestrator |
2025-09-19 06:26:12.311365 | orchestrator | TASK [Install required packages] ***********************************************
2025-09-19 06:26:32.027771 | orchestrator | changed: [testbed-manager]
2025-09-19 06:26:32.027896 | orchestrator |
2025-09-19 06:26:32.027920 | orchestrator | TASK [Remove some python packages] *********************************************
2025-09-19 06:26:40.297182 | orchestrator | changed: [testbed-manager]
2025-09-19 06:26:40.297281 | orchestrator |
2025-09-19 06:26:40.297295 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-09-19 06:26:40.336911 | orchestrator | ok: [testbed-manager]
2025-09-19 06:26:40.336964 | orchestrator |
2025-09-19 06:26:40.336971 | orchestrator | TASK [Get current user] ********************************************************
2025-09-19 06:26:41.137573 | orchestrator | ok: [testbed-manager]
2025-09-19 06:26:41.137661 | orchestrator |
2025-09-19 06:26:41.137680 | orchestrator | TASK [Create venv directory] ***************************************************
2025-09-19 06:26:41.877921 | orchestrator | changed: [testbed-manager]
2025-09-19 06:26:41.878002 | orchestrator |
2025-09-19 06:26:41.878044 | orchestrator | TASK [Install netaddr in venv] *************************************************
2025-09-19 06:26:48.491697 | orchestrator | changed: [testbed-manager]
2025-09-19 06:26:48.491785 | orchestrator |
2025-09-19 06:26:48.491825 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2025-09-19 06:26:54.623820 | orchestrator | changed: [testbed-manager]
2025-09-19 06:26:54.623942 | orchestrator |
2025-09-19 06:26:54.623961 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2025-09-19 06:26:57.474415 | orchestrator | changed: [testbed-manager]
2025-09-19 06:26:57.474505 | orchestrator |
2025-09-19 06:26:57.474521 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2025-09-19 06:26:59.308727 | orchestrator | changed: [testbed-manager]
2025-09-19 06:26:59.308789 | orchestrator |
2025-09-19 06:26:59.308800 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2025-09-19 06:27:00.511324 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-09-19 06:27:00.511366 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-09-19 06:27:00.511373 | orchestrator |
2025-09-19 06:27:00.511379 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2025-09-19 06:27:00.554970 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-09-19 06:27:00.555049 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-09-19 06:27:00.555065 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-09-19 06:27:00.555078 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-09-19 06:27:04.291553 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-09-19 06:27:04.291596 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-09-19 06:27:04.291603 | orchestrator |
2025-09-19 06:27:04.291608 | orchestrator | TASK [Create /usr/share/ansible directory] *************************************
2025-09-19 06:27:04.944008 | orchestrator | changed: [testbed-manager]
2025-09-19 06:27:04.944052 | orchestrator |
2025-09-19 06:27:04.944059 | orchestrator | TASK [Install collections from Ansible galaxy] *********************************
2025-09-19 06:28:24.285989 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon)
2025-09-19 06:28:24.286154 | orchestrator | changed: [testbed-manager] => (item=ansible.posix)
2025-09-19 06:28:24.286178 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2)
2025-09-19 06:28:24.286196 | orchestrator |
2025-09-19 06:28:24.286214 | orchestrator | TASK [Install local collections] ***********************************************
2025-09-19 06:28:26.512113 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons)
2025-09-19 06:28:26.512150 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services)
2025-09-19 06:28:26.512155 | orchestrator |
2025-09-19 06:28:26.512160 | orchestrator | PLAY [Create operator user] ****************************************************
2025-09-19 06:28:26.512165 | orchestrator |
2025-09-19 06:28:26.512169 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-19 06:28:27.916385 | orchestrator | ok: [testbed-manager]
2025-09-19 06:28:27.916476 | orchestrator |
2025-09-19 06:28:27.916495 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-09-19 06:28:27.962946 | orchestrator | ok: [testbed-manager]
2025-09-19 06:28:27.963002 | orchestrator |
2025-09-19 06:28:27.963017 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-09-19 06:28:28.054587 | orchestrator | ok: [testbed-manager]
2025-09-19 06:28:28.054646 | orchestrator |
2025-09-19 06:28:28.054653 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-09-19 06:28:28.831993 | orchestrator | changed: [testbed-manager]
2025-09-19 06:28:28.832085 | orchestrator |
2025-09-19 06:28:28.832101 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-09-19 06:28:29.571685 | orchestrator | changed: [testbed-manager]
2025-09-19 06:28:29.571774 | orchestrator |
2025-09-19 06:28:29.571792 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-09-19 06:28:30.907180 | orchestrator | changed: [testbed-manager] => (item=adm)
2025-09-19 06:28:30.907263 | orchestrator | changed: [testbed-manager] => (item=sudo)
2025-09-19 06:28:30.907279 | orchestrator |
2025-09-19 06:28:30.907304 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-09-19 06:28:32.235487 | orchestrator | changed: [testbed-manager]
2025-09-19 06:28:32.235618 | orchestrator |
2025-09-19 06:28:32.235644 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-09-19 06:28:33.953050 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8)
2025-09-19 06:28:33.953127 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8)
2025-09-19 06:28:33.953140 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8)
2025-09-19 06:28:33.953151 | orchestrator |
2025-09-19 06:28:33.953162 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-09-19 06:28:34.012963 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:28:34.013027 | orchestrator |
2025-09-19 06:28:34.013034 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-09-19 06:28:34.568813 | orchestrator | changed: [testbed-manager]
2025-09-19 06:28:34.568910 | orchestrator |
2025-09-19 06:28:34.568921 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-09-19 06:28:34.638975 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:28:34.639018 | orchestrator |
2025-09-19 06:28:34.639027 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-09-19 06:28:35.462088 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-19 06:28:35.462216 | orchestrator | changed: [testbed-manager]
2025-09-19 06:28:35.462235 | orchestrator |
2025-09-19 06:28:35.462249 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-09-19 06:28:35.502329 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:28:35.502405 | orchestrator |
2025-09-19 06:28:35.502418 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-09-19 06:28:35.540268 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:28:35.540348 | orchestrator |
2025-09-19 06:28:35.540371 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-09-19 06:28:35.575171 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:28:35.575247 | orchestrator |
2025-09-19 06:28:35.575262 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-09-19 06:28:35.625593 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:28:35.625700 | orchestrator |
2025-09-19 06:28:35.625730 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-09-19 06:28:36.389170 | orchestrator | ok: [testbed-manager]
2025-09-19 06:28:36.389254 | orchestrator |
2025-09-19 06:28:36.389268 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-09-19 06:28:36.389280 | orchestrator |
2025-09-19 06:28:36.389291 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-19 06:28:37.859440 | orchestrator | ok: [testbed-manager]
2025-09-19 06:28:37.859526 | orchestrator |
2025-09-19 06:28:37.859541 | orchestrator | TASK [Recursively change ownership of /opt/venv] *******************************
2025-09-19 06:28:38.842228 | orchestrator | changed: [testbed-manager]
2025-09-19 06:28:38.842309 | orchestrator |
2025-09-19 06:28:38.842324 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:28:38.842338 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
2025-09-19 06:28:38.842349 | orchestrator |
2025-09-19 06:28:39.381395 | orchestrator | ok: Runtime: 0:06:16.540704
2025-09-19 06:28:39.399219 |
2025-09-19 06:28:39.399351 | TASK [Point out that the log in on the manager is now possible]
2025-09-19 06:28:39.450204 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'.
2025-09-19 06:28:39.460412 |
2025-09-19 06:28:39.460556 | TASK [Point out that the following task takes some time and does not give any output]
2025-09-19 06:28:39.494153 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-09-19 06:28:39.503697 |
2025-09-19 06:28:39.503814 | TASK [Run manager part 1 + 2]
2025-09-19 06:28:40.427359 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-09-19 06:28:40.490078 | orchestrator |
2025-09-19 06:28:40.490130 | orchestrator | PLAY [Run manager part 1] ******************************************************
2025-09-19 06:28:40.490137 | orchestrator |
2025-09-19 06:28:40.490151 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-19 06:28:43.353379 | orchestrator | ok: [testbed-manager]
2025-09-19 06:28:43.353431 | orchestrator |
2025-09-19 06:28:43.353450 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-09-19 06:28:43.389035 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:28:43.389090 | orchestrator |
2025-09-19 06:28:43.389101 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-09-19 06:28:43.432951 | orchestrator | ok: [testbed-manager]
2025-09-19 06:28:43.433009 | orchestrator |
2025-09-19 06:28:43.433024 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-09-19 06:28:43.484073 | orchestrator | ok: [testbed-manager]
2025-09-19 06:28:43.484129 | orchestrator |
2025-09-19 06:28:43.484139 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-09-19 06:28:43.551331 | orchestrator | ok: [testbed-manager]
2025-09-19 06:28:43.551380 | orchestrator |
2025-09-19 06:28:43.551390 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-09-19 06:28:43.611598 | orchestrator | ok: [testbed-manager]
2025-09-19 06:28:43.611653 | orchestrator |
2025-09-19 06:28:43.611664 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-09-19 06:28:43.659349 | orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager
2025-09-19 06:28:43.659394 | orchestrator |
2025-09-19 06:28:43.659400 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-09-19 06:28:44.370983 | orchestrator | ok: [testbed-manager]
2025-09-19 06:28:44.371040 | orchestrator |
2025-09-19 06:28:44.371050 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-09-19 06:28:44.423277 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:28:44.423343 | orchestrator |
2025-09-19 06:28:44.423358 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-09-19 06:28:45.768751 | orchestrator | changed: [testbed-manager]
2025-09-19 06:28:45.768897 | orchestrator |
2025-09-19 06:28:45.768931 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-09-19 06:28:46.308277 | orchestrator | ok: [testbed-manager]
2025-09-19 06:28:46.308358 | orchestrator |
2025-09-19 06:28:46.308373 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-09-19 06:28:47.387030 | orchestrator | changed: [testbed-manager]
2025-09-19 06:28:47.387173 | orchestrator |
2025-09-19 06:28:47.387188 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-09-19 06:29:05.440265 | orchestrator | changed: [testbed-manager]
2025-09-19 06:29:05.440321 | orchestrator |
2025-09-19 06:29:05.440327 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-09-19 06:29:06.106676 | orchestrator | ok: [testbed-manager]
2025-09-19 06:29:06.106725 | orchestrator |
2025-09-19 06:29:06.106732 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-09-19 06:29:06.160284 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:29:06.160305 | orchestrator |
2025-09-19 06:29:06.160309 | orchestrator | TASK [Copy SSH public key] *****************************************************
2025-09-19 06:29:07.052938 | orchestrator | changed: [testbed-manager]
2025-09-19 06:29:07.052977 | orchestrator |
2025-09-19 06:29:07.052985 | orchestrator | TASK [Copy SSH private key] ****************************************************
2025-09-19 06:29:07.901676 | orchestrator | changed: [testbed-manager]
2025-09-19 06:29:07.901714 | orchestrator |
2025-09-19 06:29:07.901721 | orchestrator | TASK [Create configuration directory] ******************************************
2025-09-19 06:29:08.440133 | orchestrator | changed: [testbed-manager]
2025-09-19 06:29:08.440165 | orchestrator |
2025-09-19 06:29:08.440172 | orchestrator | TASK [Copy testbed repo] *******************************************************
2025-09-19 06:29:08.481487 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-09-19 06:29:08.481543 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-09-19 06:29:08.481552 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-09-19 06:29:08.481558 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-09-19 06:29:10.573080 | orchestrator | changed: [testbed-manager]
2025-09-19 06:29:10.573163 | orchestrator |
2025-09-19 06:29:10.573180 | orchestrator | TASK [Install python requirements in venv] *************************************
2025-09-19 06:29:19.426249 | orchestrator | ok: [testbed-manager] => (item=Jinja2)
2025-09-19 06:29:19.426298 | orchestrator | ok: [testbed-manager] => (item=PyYAML)
2025-09-19 06:29:19.426310 | orchestrator | ok: [testbed-manager] => (item=packaging)
2025-09-19 06:29:19.426320 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3)
2025-09-19 06:29:19.426333 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2)
2025-09-19 06:29:19.426341 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0)
2025-09-19 06:29:19.426350 | orchestrator |
2025-09-19 06:29:19.426359 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] *********************
2025-09-19 06:29:20.497570 | orchestrator | changed: [testbed-manager]
2025-09-19 06:29:20.497659 | orchestrator |
2025-09-19 06:29:20.497674 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] ****************************
2025-09-19 06:29:20.541162 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:29:20.541202 | orchestrator |
2025-09-19 06:29:20.541211 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] *****************************
2025-09-19 06:29:23.678722 | orchestrator | changed: [testbed-manager]
2025-09-19 06:29:23.678766 | orchestrator |
2025-09-19 06:29:23.678775 | orchestrator | TASK [Run update-ca-trust on RedHat] *******************************************
2025-09-19 06:29:23.718458 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:29:23.718500 | orchestrator |
2025-09-19 06:29:23.718508 | orchestrator | TASK [Run manager part 2] ******************************************************
2025-09-19 06:31:03.934710 | orchestrator | changed: [testbed-manager]
2025-09-19 06:31:03.934832 | orchestrator |
2025-09-19 06:31:03.934854 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-19 06:31:04.941452 | orchestrator | ok: [testbed-manager]
2025-09-19 06:31:04.941536 | orchestrator |
2025-09-19 06:31:04.941554 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:31:04.941568 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-09-19 06:31:04.941580 | orchestrator |
2025-09-19 06:31:05.128905 | orchestrator | ok: Runtime: 0:02:25.203477
2025-09-19 06:31:05.146908 |
2025-09-19 06:31:05.147066 | TASK [Reboot manager]
2025-09-19 06:31:06.682796 | orchestrator | ok: Runtime: 0:00:00.904068
2025-09-19 06:31:06.699512 |
2025-09-19 06:31:06.699731 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-09-19 06:31:21.332968 | orchestrator | ok
2025-09-19 06:31:21.343142 |
2025-09-19 06:31:21.343261 | TASK [Wait a little longer for the manager so that everything is ready]
2025-09-19 06:32:21.390933 | orchestrator | ok
2025-09-19 06:32:21.400831 |
2025-09-19 06:32:21.400965 | TASK [Deploy manager + bootstrap nodes]
2025-09-19 06:32:24.034819 | orchestrator |
2025-09-19 06:32:24.034980 | orchestrator | # DEPLOY MANAGER
2025-09-19 06:32:24.034994 | orchestrator |
2025-09-19 06:32:24.035003 | orchestrator | + set -e
2025-09-19 06:32:24.035012 | orchestrator | + echo
2025-09-19 06:32:24.035021 | orchestrator | + echo '# DEPLOY MANAGER'
2025-09-19 06:32:24.035032 | orchestrator | + echo
2025-09-19 06:32:24.035067 | orchestrator | + cat /opt/manager-vars.sh
2025-09-19 06:32:24.037804 | orchestrator | export NUMBER_OF_NODES=6
2025-09-19 06:32:24.037831 | orchestrator |
2025-09-19 06:32:24.037839 | orchestrator | export CEPH_VERSION=reef
2025-09-19 06:32:24.037848 | orchestrator | export CONFIGURATION_VERSION=main
2025-09-19 06:32:24.037857 | orchestrator | export MANAGER_VERSION=9.2.0
2025-09-19 06:32:24.037874 | orchestrator | export OPENSTACK_VERSION=2024.2
2025-09-19 06:32:24.037881 | orchestrator |
2025-09-19 06:32:24.037894 | orchestrator | export ARA=false
2025-09-19 06:32:24.037901 | orchestrator | export DEPLOY_MODE=manager
2025-09-19 06:32:24.037914 | orchestrator | export TEMPEST=false
2025-09-19 06:32:24.037921 | orchestrator | export IS_ZUUL=true
2025-09-19 06:32:24.037929 | orchestrator |
2025-09-19 06:32:24.037942 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189
2025-09-19 06:32:24.037950 | orchestrator | export EXTERNAL_API=false
2025-09-19 06:32:24.037957 | orchestrator |
2025-09-19 06:32:24.037965 | orchestrator | export IMAGE_USER=ubuntu
2025-09-19 06:32:24.037976 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-09-19 06:32:24.037983 | orchestrator |
2025-09-19 06:32:24.037991 | orchestrator | export CEPH_STACK=ceph-ansible
2025-09-19 06:32:24.038003 | orchestrator |
2025-09-19 06:32:24.038011 | orchestrator | + echo
2025-09-19 06:32:24.038072 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-19 06:32:24.039419 | orchestrator | ++ export INTERACTIVE=false
2025-09-19 06:32:24.039458 | orchestrator | ++ INTERACTIVE=false
2025-09-19 06:32:24.039469 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-19 06:32:24.039478 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-19 06:32:24.039788 | orchestrator | + source /opt/manager-vars.sh
2025-09-19 06:32:24.039833 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-19 06:32:24.039847 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-19 06:32:24.039856 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-19 06:32:24.039863 | orchestrator | ++ CEPH_VERSION=reef
2025-09-19 06:32:24.039871 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-19 06:32:24.039879 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-19 06:32:24.039893 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-09-19 06:32:24.039906 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-09-19 06:32:24.039919 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-19 06:32:24.039943 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-19 06:32:24.039951 | orchestrator | ++ export ARA=false
2025-09-19 06:32:24.039959 | orchestrator | ++ ARA=false
2025-09-19 06:32:24.039966 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-19 06:32:24.039977 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-19 06:32:24.039994 | orchestrator | ++ export TEMPEST=false
2025-09-19 06:32:24.040007 | orchestrator | ++ TEMPEST=false
2025-09-19 06:32:24.040019 | orchestrator | ++ export IS_ZUUL=true
2025-09-19 06:32:24.040027 | orchestrator | ++ IS_ZUUL=true
2025-09-19 06:32:24.040034 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189
2025-09-19 06:32:24.040042 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189
2025-09-19 06:32:24.040049 | orchestrator | ++ export EXTERNAL_API=false
2025-09-19 06:32:24.040060 | orchestrator | ++ EXTERNAL_API=false
2025-09-19 06:32:24.040073 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-19 06:32:24.040085 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-19 06:32:24.040102 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-19 06:32:24.040110 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-19 06:32:24.040215 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-19 06:32:24.040235 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-19 06:32:24.040358 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-09-19 06:32:24.099204 | orchestrator | + docker version
2025-09-19 06:32:24.384167 | orchestrator | Client: Docker Engine - Community
2025-09-19 06:32:24.384271 | orchestrator | Version: 27.5.1
2025-09-19 06:32:24.384289 | orchestrator | API version: 1.47
2025-09-19 06:32:24.384301 | orchestrator | Go version: go1.22.11
2025-09-19 06:32:24.384312 | orchestrator | Git commit: 9f9e405
2025-09-19 06:32:24.384324 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-09-19 06:32:24.384337 | orchestrator | OS/Arch: linux/amd64
2025-09-19 06:32:24.384348 | orchestrator | Context: default
2025-09-19 06:32:24.384359 | orchestrator |
2025-09-19 06:32:24.384371 | orchestrator | Server: Docker Engine - Community
2025-09-19 06:32:24.384382 | orchestrator | Engine:
2025-09-19 06:32:24.384394 | orchestrator | Version: 27.5.1
2025-09-19 06:32:24.384406 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-09-19 06:32:24.384447 | orchestrator | Go version: go1.22.11
2025-09-19 06:32:24.384459 | orchestrator | Git commit: 4c9b3b0
2025-09-19 06:32:24.384470 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-09-19 06:32:24.384482 | orchestrator | OS/Arch: linux/amd64
2025-09-19 06:32:24.384493 | orchestrator | Experimental: false
2025-09-19 06:32:24.384504 | orchestrator | containerd:
2025-09-19 06:32:24.384516 | orchestrator | Version: 1.7.27
2025-09-19 06:32:24.384527 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-09-19 06:32:24.384539 | orchestrator | runc:
2025-09-19 06:32:24.384550 | orchestrator | Version: 1.2.5
2025-09-19 06:32:24.384562 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-09-19 06:32:24.384573 | orchestrator | docker-init:
2025-09-19 06:32:24.384584 | orchestrator | Version: 0.19.0
2025-09-19 06:32:24.384596 | orchestrator | GitCommit: de40ad0
2025-09-19 06:32:24.388038 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-09-19 06:32:24.397374 | orchestrator | + set -e
2025-09-19 06:32:24.397408 | orchestrator | + source /opt/manager-vars.sh
2025-09-19 06:32:24.397420 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-19 06:32:24.397431 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-19 06:32:24.397442 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-19 06:32:24.397452 | orchestrator | ++ CEPH_VERSION=reef
2025-09-19 06:32:24.397464 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-19 06:32:24.397475 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-19 06:32:24.397486 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-09-19 06:32:24.397497 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-09-19 06:32:24.397508 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-19 06:32:24.397519 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-19 06:32:24.397530 | orchestrator | ++ export ARA=false
2025-09-19 06:32:24.397541 | orchestrator | ++ ARA=false
2025-09-19 06:32:24.397552 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-19 06:32:24.397563 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-19 06:32:24.397576 | orchestrator | ++ export TEMPEST=false
2025-09-19 06:32:24.397595 | orchestrator | ++ TEMPEST=false
2025-09-19 06:32:24.397613 | orchestrator | ++ export IS_ZUUL=true
2025-09-19 06:32:24.397630 | orchestrator | ++ IS_ZUUL=true
2025-09-19 06:32:24.397647 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189
2025-09-19 06:32:24.397666 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189
2025-09-19 06:32:24.397686 | orchestrator | ++ export EXTERNAL_API=false
2025-09-19 06:32:24.397704 | orchestrator | ++ EXTERNAL_API=false
2025-09-19 06:32:24.397721 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-19 06:32:24.397732 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-19 06:32:24.397743 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-19 06:32:24.397754 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-19 06:32:24.397765 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-19 06:32:24.397776 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-19 06:32:24.397786 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-19 06:32:24.397826 | orchestrator | ++ export INTERACTIVE=false
2025-09-19 06:32:24.397837 | orchestrator | ++ INTERACTIVE=false
2025-09-19 06:32:24.397848 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-19 06:32:24.397864 | orchestrator | ++
OSISM_APPLY_RETRY=1 2025-09-19 06:32:24.397875 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]] 2025-09-19 06:32:24.397887 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.2.0 2025-09-19 06:32:24.402474 | orchestrator | + set -e 2025-09-19 06:32:24.402502 | orchestrator | + VERSION=9.2.0 2025-09-19 06:32:24.402514 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.2.0/g' /opt/configuration/environments/manager/configuration.yml 2025-09-19 06:32:24.409771 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]] 2025-09-19 06:32:24.409839 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-09-19 06:32:24.413482 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-09-19 06:32:24.416727 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-09-19 06:32:24.426733 | orchestrator | /opt/configuration ~ 2025-09-19 06:32:24.426763 | orchestrator | + set -e 2025-09-19 06:32:24.426775 | orchestrator | + pushd /opt/configuration 2025-09-19 06:32:24.426786 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-19 06:32:24.428580 | orchestrator | + source /opt/venv/bin/activate 2025-09-19 06:32:24.430276 | orchestrator | ++ deactivate nondestructive 2025-09-19 06:32:24.430296 | orchestrator | ++ '[' -n '' ']' 2025-09-19 06:32:24.430310 | orchestrator | ++ '[' -n '' ']' 2025-09-19 06:32:24.430344 | orchestrator | ++ hash -r 2025-09-19 06:32:24.430355 | orchestrator | ++ '[' -n '' ']' 2025-09-19 06:32:24.430366 | orchestrator | ++ unset VIRTUAL_ENV 2025-09-19 06:32:24.430382 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-09-19 06:32:24.430394 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-09-19 06:32:24.430405 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-09-19 06:32:24.430416 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-09-19 06:32:24.430427 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-09-19 06:32:24.430438 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-09-19 06:32:24.430449 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-19 06:32:24.430496 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-19 06:32:24.430509 | orchestrator | ++ export PATH 2025-09-19 06:32:24.430562 | orchestrator | ++ '[' -n '' ']' 2025-09-19 06:32:24.430575 | orchestrator | ++ '[' -z '' ']' 2025-09-19 06:32:24.430586 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-09-19 06:32:24.430597 | orchestrator | ++ PS1='(venv) ' 2025-09-19 06:32:24.430608 | orchestrator | ++ export PS1 2025-09-19 06:32:24.430619 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-09-19 06:32:24.430630 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-09-19 06:32:24.430641 | orchestrator | ++ hash -r 2025-09-19 06:32:24.431100 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-09-19 06:32:25.638153 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-09-19 06:32:25.638994 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2025-09-19 06:32:25.640530 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-09-19 06:32:25.641938 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-09-19 06:32:25.643295 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (25.0) 2025-09-19 06:32:25.653512 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.0) 2025-09-19 06:32:25.655156 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-09-19 06:32:25.656439 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2025-09-19 06:32:25.657834 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-09-19 06:32:25.697188 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.3) 2025-09-19 06:32:25.698768 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-09-19 06:32:25.700578 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.5.0) 2025-09-19 06:32:25.701933 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.8.3) 2025-09-19 06:32:25.706244 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-09-19 06:32:25.929192 | orchestrator | ++ which gilt 2025-09-19 06:32:25.934099 | orchestrator | + GILT=/opt/venv/bin/gilt 2025-09-19 06:32:25.934163 | orchestrator | + /opt/venv/bin/gilt overlay 2025-09-19 06:32:26.162563 | orchestrator | osism.cfg-generics: 2025-09-19 06:32:26.353196 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-09-19 06:32:26.353305 | orchestrator | - copied 
(v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-09-19 06:32:26.353332 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-09-19 06:32:26.353998 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-09-19 06:32:27.252297 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-09-19 06:32:27.264342 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-09-19 06:32:27.593047 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-09-19 06:32:27.660303 | orchestrator | ~ 2025-09-19 06:32:27.660400 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-19 06:32:27.660415 | orchestrator | + deactivate 2025-09-19 06:32:27.660426 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-09-19 06:32:27.660438 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-19 06:32:27.660448 | orchestrator | + export PATH 2025-09-19 06:32:27.660458 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-09-19 06:32:27.660468 | orchestrator | + '[' -n '' ']' 2025-09-19 06:32:27.660481 | orchestrator | + hash -r 2025-09-19 06:32:27.660491 | orchestrator | + '[' -n '' ']' 2025-09-19 06:32:27.660501 | orchestrator | + unset VIRTUAL_ENV 2025-09-19 06:32:27.660510 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-09-19 06:32:27.660520 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-09-19 06:32:27.660530 | orchestrator | + unset -f deactivate 2025-09-19 06:32:27.660540 | orchestrator | + popd 2025-09-19 06:32:27.661739 | orchestrator | + [[ 9.2.0 == \l\a\t\e\s\t ]] 2025-09-19 06:32:27.661760 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-09-19 06:32:27.662485 | orchestrator | ++ semver 9.2.0 7.0.0 2025-09-19 06:32:27.729929 | orchestrator | + [[ 1 -ge 0 ]] 2025-09-19 06:32:27.730090 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-09-19 06:32:27.730107 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-09-19 06:32:27.828472 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-19 06:32:27.828581 | orchestrator | + source /opt/venv/bin/activate 2025-09-19 06:32:27.828596 | orchestrator | ++ deactivate nondestructive 2025-09-19 06:32:27.828608 | orchestrator | ++ '[' -n '' ']' 2025-09-19 06:32:27.828620 | orchestrator | ++ '[' -n '' ']' 2025-09-19 06:32:27.828631 | orchestrator | ++ hash -r 2025-09-19 06:32:27.828643 | orchestrator | ++ '[' -n '' ']' 2025-09-19 06:32:27.828654 | orchestrator | ++ unset VIRTUAL_ENV 2025-09-19 06:32:27.828665 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-09-19 06:32:27.828676 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-09-19 06:32:27.828687 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-09-19 06:32:27.828698 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-09-19 06:32:27.828722 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-09-19 06:32:27.828734 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-09-19 06:32:27.828746 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-19 06:32:27.828758 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-19 06:32:27.828836 | orchestrator | ++ export PATH 2025-09-19 06:32:27.828850 | orchestrator | ++ '[' -n '' ']' 2025-09-19 06:32:27.828862 | orchestrator | ++ '[' -z '' ']' 2025-09-19 06:32:27.828873 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-09-19 06:32:27.828884 | orchestrator | ++ PS1='(venv) ' 2025-09-19 06:32:27.828895 | orchestrator | ++ export PS1 2025-09-19 06:32:27.828911 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-09-19 06:32:27.828923 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-09-19 06:32:27.828934 | orchestrator | ++ hash -r 2025-09-19 06:32:27.828946 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-09-19 06:32:28.992154 | orchestrator | 2025-09-19 06:32:28.992261 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-09-19 06:32:28.992278 | orchestrator | 2025-09-19 06:32:28.992290 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-19 06:32:29.570175 | orchestrator | ok: [testbed-manager] 2025-09-19 06:32:29.570275 | orchestrator | 2025-09-19 06:32:29.570291 | orchestrator | TASK [Copy fact files] ********************************************************* 
2025-09-19 06:32:30.619561 | orchestrator | changed: [testbed-manager] 2025-09-19 06:32:30.619674 | orchestrator | 2025-09-19 06:32:30.619692 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-09-19 06:32:30.619705 | orchestrator | 2025-09-19 06:32:30.619717 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 06:32:33.053062 | orchestrator | ok: [testbed-manager] 2025-09-19 06:32:33.053179 | orchestrator | 2025-09-19 06:32:33.053196 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-09-19 06:32:33.121445 | orchestrator | ok: [testbed-manager] 2025-09-19 06:32:33.121531 | orchestrator | 2025-09-19 06:32:33.121549 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-09-19 06:32:33.607390 | orchestrator | changed: [testbed-manager] 2025-09-19 06:32:33.607493 | orchestrator | 2025-09-19 06:32:33.607512 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-09-19 06:32:33.653263 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:32:33.653343 | orchestrator | 2025-09-19 06:32:33.653357 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-19 06:32:34.057625 | orchestrator | changed: [testbed-manager] 2025-09-19 06:32:34.057720 | orchestrator | 2025-09-19 06:32:34.057735 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-09-19 06:32:34.122407 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:32:34.122492 | orchestrator | 2025-09-19 06:32:34.122505 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-09-19 06:32:34.487365 | orchestrator | ok: [testbed-manager] 2025-09-19 06:32:34.487462 | orchestrator | 2025-09-19 06:32:34.487476 | orchestrator | TASK 
[Add nova_compute_virt_type parameter] ************************************ 2025-09-19 06:32:34.612475 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:32:34.612573 | orchestrator | 2025-09-19 06:32:34.612589 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-09-19 06:32:34.612602 | orchestrator | 2025-09-19 06:32:34.612613 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 06:32:36.436998 | orchestrator | ok: [testbed-manager] 2025-09-19 06:32:36.437103 | orchestrator | 2025-09-19 06:32:36.437119 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-09-19 06:32:36.547136 | orchestrator | included: osism.services.traefik for testbed-manager 2025-09-19 06:32:36.547230 | orchestrator | 2025-09-19 06:32:36.547244 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-09-19 06:32:36.604515 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-09-19 06:32:36.604589 | orchestrator | 2025-09-19 06:32:36.604602 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-09-19 06:32:37.805292 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-09-19 06:32:37.805412 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-09-19 06:32:37.805429 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-09-19 06:32:37.805440 | orchestrator | 2025-09-19 06:32:37.805451 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-09-19 06:32:39.625127 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-09-19 06:32:39.625245 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 
2025-09-19 06:32:39.625262 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-09-19 06:32:39.625275 | orchestrator |
2025-09-19 06:32:39.625289 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-09-19 06:32:40.218577 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-19 06:32:40.218665 | orchestrator | changed: [testbed-manager]
2025-09-19 06:32:40.218677 | orchestrator |
2025-09-19 06:32:40.218685 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-09-19 06:32:40.767902 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-19 06:32:40.768013 | orchestrator | changed: [testbed-manager]
2025-09-19 06:32:40.768030 | orchestrator |
2025-09-19 06:32:40.768044 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-09-19 06:32:40.819781 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:32:40.819848 | orchestrator |
2025-09-19 06:32:40.819861 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-09-19 06:32:41.148368 | orchestrator | ok: [testbed-manager]
2025-09-19 06:32:41.148455 | orchestrator |
2025-09-19 06:32:41.148467 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-09-19 06:32:41.226114 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-09-19 06:32:41.226205 | orchestrator |
2025-09-19 06:32:41.226218 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-09-19 06:32:42.359215 | orchestrator | changed: [testbed-manager]
2025-09-19 06:32:42.359318 | orchestrator |
2025-09-19 06:32:42.359335 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-09-19 06:32:43.217231 | orchestrator | changed: [testbed-manager]
2025-09-19 06:32:43.217353 | orchestrator |
2025-09-19 06:32:43.217371 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-09-19 06:32:54.893711 | orchestrator | changed: [testbed-manager]
2025-09-19 06:32:54.893873 | orchestrator |
2025-09-19 06:32:54.893912 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-09-19 06:32:54.944673 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:32:54.944757 | orchestrator |
2025-09-19 06:32:54.944771 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-09-19 06:32:54.944817 | orchestrator |
2025-09-19 06:32:54.944829 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-19 06:32:57.780553 | orchestrator | ok: [testbed-manager]
2025-09-19 06:32:57.780647 | orchestrator |
2025-09-19 06:32:57.780661 | orchestrator | TASK [Apply manager role] ******************************************************
2025-09-19 06:32:57.894399 | orchestrator | included: osism.services.manager for testbed-manager
2025-09-19 06:32:57.894491 | orchestrator |
2025-09-19 06:32:57.894504 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-09-19 06:32:57.965431 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-09-19 06:32:57.965522 | orchestrator |
2025-09-19 06:32:57.965537 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-09-19 06:33:00.399493 | orchestrator | ok: [testbed-manager]
2025-09-19 06:33:00.399594 | orchestrator |
2025-09-19 06:33:00.399608 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-09-19 06:33:00.453516 | orchestrator | ok: [testbed-manager]
2025-09-19 06:33:00.453585 | orchestrator |
2025-09-19 06:33:00.453599 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-09-19 06:33:00.573377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-09-19 06:33:00.573491 | orchestrator |
2025-09-19 06:33:00.573511 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-09-19 06:33:03.209669 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-09-19 06:33:03.209747 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-09-19 06:33:03.209754 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-09-19 06:33:03.209761 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-09-19 06:33:03.209767 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-09-19 06:33:03.209773 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-09-19 06:33:03.209778 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-09-19 06:33:03.209818 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-09-19 06:33:03.209824 | orchestrator |
2025-09-19 06:33:03.209832 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-09-19 06:33:03.843122 | orchestrator | changed: [testbed-manager]
2025-09-19 06:33:03.843225 | orchestrator |
2025-09-19 06:33:03.843242 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-09-19 06:33:04.418303 | orchestrator | changed: [testbed-manager]
2025-09-19 06:33:04.418430 | orchestrator |
2025-09-19 06:33:04.418448 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-09-19 06:33:04.495811 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-09-19 06:33:04.495892 | orchestrator |
2025-09-19 06:33:04.495903 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-09-19 06:33:05.626992 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-09-19 06:33:05.627114 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-09-19 06:33:05.627131 | orchestrator |
2025-09-19 06:33:05.627145 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-09-19 06:33:06.272918 | orchestrator | changed: [testbed-manager]
2025-09-19 06:33:06.273019 | orchestrator |
2025-09-19 06:33:06.273033 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-09-19 06:33:06.323413 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:33:06.323526 | orchestrator |
2025-09-19 06:33:06.323550 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2025-09-19 06:33:06.400757 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2025-09-19 06:33:06.400890 | orchestrator |
2025-09-19 06:33:06.400904 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2025-09-19 06:33:06.966130 | orchestrator | changed: [testbed-manager]
2025-09-19 06:33:06.966227 | orchestrator |
2025-09-19 06:33:06.966241 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-09-19 06:33:07.032721 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-09-19 06:33:07.032834 | orchestrator |
2025-09-19 06:33:07.032848 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-09-19 06:33:08.372402 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-19 06:33:08.372505 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-19 06:33:08.372521 | orchestrator | changed: [testbed-manager]
2025-09-19 06:33:08.372534 | orchestrator |
2025-09-19 06:33:08.372547 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-09-19 06:33:08.976951 | orchestrator | changed: [testbed-manager]
2025-09-19 06:33:08.977057 | orchestrator |
2025-09-19 06:33:08.977072 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-09-19 06:33:09.019550 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:33:09.019640 | orchestrator |
2025-09-19 06:33:09.019656 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-09-19 06:33:09.103509 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-09-19 06:33:09.103607 | orchestrator |
2025-09-19 06:33:09.103623 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-09-19 06:33:09.584915 | orchestrator | changed: [testbed-manager]
2025-09-19 06:33:09.585019 | orchestrator |
2025-09-19 06:33:09.585035 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-09-19 06:33:10.003980 | orchestrator | changed: [testbed-manager]
2025-09-19 06:33:10.004076 | orchestrator |
2025-09-19 06:33:10.004092 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-09-19 06:33:11.368936 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-09-19 06:33:11.369062 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-09-19 06:33:11.369078 | orchestrator |
2025-09-19 06:33:11.369091 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-09-19 06:33:12.086721 | orchestrator | changed: [testbed-manager]
2025-09-19 06:33:12.086872 | orchestrator |
2025-09-19 06:33:12.086888 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-09-19 06:33:12.522008 | orchestrator | ok: [testbed-manager]
2025-09-19 06:33:12.522148 | orchestrator |
2025-09-19 06:33:12.522162 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-09-19 06:33:12.874930 | orchestrator | changed: [testbed-manager]
2025-09-19 06:33:12.875029 | orchestrator |
2025-09-19 06:33:12.875045 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-09-19 06:33:12.917671 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:33:12.917776 | orchestrator |
2025-09-19 06:33:12.917833 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-09-19 06:33:12.988503 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-09-19 06:33:12.988589 | orchestrator |
2025-09-19 06:33:12.988595 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-09-19 06:33:13.032140 | orchestrator | ok: [testbed-manager]
2025-09-19 06:33:13.032193 | orchestrator |
2025-09-19 06:33:13.032199 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-09-19 06:33:15.022328 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-09-19 06:33:15.022428 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-09-19 06:33:15.022444 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-09-19 06:33:15.022456 | orchestrator |
2025-09-19 06:33:15.022468 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-09-19 06:33:15.698322 | orchestrator | changed: [testbed-manager]
2025-09-19 06:33:15.698424 | orchestrator |
2025-09-19 06:33:15.698441 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-09-19 06:33:16.362129 | orchestrator | changed: [testbed-manager]
2025-09-19 06:33:16.362222 | orchestrator |
2025-09-19 06:33:16.362234 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-09-19 06:33:17.114990 | orchestrator | changed: [testbed-manager]
2025-09-19 06:33:17.115096 | orchestrator |
2025-09-19 06:33:17.115112 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-09-19 06:33:17.199692 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-09-19 06:33:17.199814 | orchestrator |
2025-09-19 06:33:17.199831 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-09-19 06:33:17.255992 | orchestrator | ok: [testbed-manager]
2025-09-19 06:33:17.256091 | orchestrator |
2025-09-19 06:33:17.256107 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-09-19 06:33:18.051535 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-09-19 06:33:18.051637 | orchestrator |
2025-09-19 06:33:18.051654 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-09-19 06:33:18.138976 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-09-19 06:33:18.139046 | orchestrator |
2025-09-19 06:33:18.139052 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-09-19 06:33:18.927427 | orchestrator | changed: [testbed-manager]
2025-09-19 06:33:18.927531 | orchestrator |
2025-09-19 06:33:18.927547 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-09-19 06:33:19.535934 | orchestrator | ok: [testbed-manager]
2025-09-19 06:33:19.536026 | orchestrator |
2025-09-19 06:33:19.536039 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-09-19 06:33:19.587692 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:33:19.587759 | orchestrator |
2025-09-19 06:33:19.587773 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-09-19 06:33:19.645130 | orchestrator | ok: [testbed-manager]
2025-09-19 06:33:19.645217 | orchestrator |
2025-09-19 06:33:19.645232 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-09-19 06:33:20.504251 | orchestrator | changed: [testbed-manager]
2025-09-19 06:33:20.504338 | orchestrator |
2025-09-19 06:33:20.504347 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-09-19 06:34:25.151242 | orchestrator | changed: [testbed-manager]
2025-09-19 06:34:25.151399 | orchestrator |
2025-09-19 06:34:25.151421 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-09-19 06:34:26.159902 | orchestrator | ok: [testbed-manager]
2025-09-19 06:34:26.160006 | orchestrator |
2025-09-19 06:34:26.160023 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-09-19 06:34:26.205811 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:34:26.205869 | orchestrator |
2025-09-19 06:34:26.205886 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-09-19 06:34:28.936987 | orchestrator | changed: [testbed-manager]
2025-09-19 06:34:28.937130 | orchestrator |
2025-09-19 06:34:28.937156 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-09-19 06:34:29.028478 | orchestrator | ok: [testbed-manager]
2025-09-19 06:34:29.028572 | orchestrator |
2025-09-19 06:34:29.028588 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-09-19 06:34:29.028601 | orchestrator |
2025-09-19 06:34:29.028613 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-09-19 06:34:29.081049 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:34:29.081175 | orchestrator |
2025-09-19 06:34:29.081199 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-09-19 06:35:29.127891 | orchestrator | Pausing for 60 seconds
2025-09-19 06:35:29.127988 | orchestrator | changed: [testbed-manager]
2025-09-19 06:35:29.128003 | orchestrator |
2025-09-19 06:35:29.128015 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-09-19 06:35:32.636123 | orchestrator | changed: [testbed-manager]
2025-09-19 06:35:32.636214 | orchestrator |
2025-09-19 06:35:32.636231 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-09-19 06:36:14.092801 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-09-19 06:36:14.092891 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-09-19 06:36:14.092903 | orchestrator | changed: [testbed-manager]
2025-09-19 06:36:14.092912 | orchestrator |
2025-09-19 06:36:14.092920 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-09-19 06:36:23.406507 | orchestrator | changed: [testbed-manager]
2025-09-19 06:36:23.406622 | orchestrator |
2025-09-19 06:36:23.406640 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-09-19 06:36:23.489283 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-09-19 06:36:23.489374 | orchestrator |
2025-09-19 06:36:23.489389 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-09-19 06:36:23.489402 | orchestrator |
2025-09-19 06:36:23.489413 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-09-19 06:36:23.539274 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:36:23.539366 | orchestrator |
2025-09-19 06:36:23.539380 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:36:23.539393 | orchestrator | testbed-manager : ok=66 changed=36 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-09-19 06:36:23.539409 | orchestrator |
2025-09-19 06:36:23.611878 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-19 06:36:23.611994 | orchestrator | + deactivate
2025-09-19 06:36:23.612018 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-09-19 06:36:23.612032 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-19 06:36:23.612043 | orchestrator | + export PATH
2025-09-19 06:36:23.612055 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-09-19 06:36:23.612068 | orchestrator | + '[' -n '' ']'
2025-09-19 06:36:23.612088 | orchestrator | + hash -r
2025-09-19 06:36:23.612106 | orchestrator | + '[' -n '' ']'
2025-09-19 06:36:23.612123 | orchestrator | + unset VIRTUAL_ENV
2025-09-19 06:36:23.612141 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-09-19 06:36:23.612160 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-09-19 06:36:23.612179 | orchestrator | + unset -f deactivate
2025-09-19 06:36:23.612200 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-09-19 06:36:23.616661 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-09-19 06:36:23.616691 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-09-19 06:36:23.616703 | orchestrator | + local max_attempts=60
2025-09-19 06:36:23.616714 | orchestrator | + local name=ceph-ansible
2025-09-19 06:36:23.616759 | orchestrator | + local attempt_num=1
2025-09-19 06:36:23.617417 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 06:36:23.649247 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-19 06:36:23.649325 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-09-19 06:36:23.649339 | orchestrator | + local max_attempts=60
2025-09-19 06:36:23.649352 | orchestrator | + local name=kolla-ansible
2025-09-19 06:36:23.649396 | orchestrator | + local attempt_num=1
2025-09-19 06:36:23.649408 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-09-19 06:36:23.681204 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-19 06:36:23.681489 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-09-19 06:36:23.681589 | orchestrator | + local max_attempts=60
2025-09-19 06:36:23.681604 | orchestrator | + local name=osism-ansible
2025-09-19 06:36:23.681616 | orchestrator | + local attempt_num=1
2025-09-19 06:36:23.682164 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-09-19 06:36:23.711017 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-19 06:36:23.711069 | orchestrator | + [[ true == \t\r\u\e ]]
2025-09-19 06:36:23.711081 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-09-19 06:36:24.314567 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-09-19 06:36:24.502250 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-09-19 06:36:24.502348 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20250711.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-09-19 06:36:24.502362 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20250711.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-09-19 06:36:24.502373 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-09-19 06:36:24.502387 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-09-19 06:36:24.502399 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-09-19 06:36:24.502410 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-09-19 06:36:24.502421 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20250711.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy)
2025-09-19 06:36:24.502432 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-09-19 06:36:24.502443 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-09-19 06:36:24.502454 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2025-09-19 06:36:24.502465 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-09-19 06:36:24.502476 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20250711.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-09-19 06:36:24.502487 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp
2025-09-19 06:36:24.502499 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20250711.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-09-19 06:36:24.502550 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2025-09-19 06:36:24.507994 | orchestrator | ++ semver 9.2.0 7.0.0
2025-09-19 06:36:24.552838 | orchestrator | + [[ 1 -ge 0 ]]
2025-09-19 06:36:24.552922 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-09-19 06:36:24.557092 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-09-19 06:36:36.464430 | orchestrator | 2025-09-19 06:36:36 | INFO  | Task 00ab94b2-9e9b-446c-8be4-e6d013559565 (resolvconf) was prepared for execution.
2025-09-19 06:36:36.464522 | orchestrator | 2025-09-19 06:36:36 | INFO  | It takes a moment until task 00ab94b2-9e9b-446c-8be4-e6d013559565 (resolvconf) has been started and output is visible here.
2025-09-19 06:36:50.211001 | orchestrator |
2025-09-19 06:36:50.211119 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-09-19 06:36:50.211136 | orchestrator |
2025-09-19 06:36:50.211149 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-19 06:36:50.211161 | orchestrator | Friday 19 September 2025 06:36:40 +0000 (0:00:00.150) 0:00:00.150 ******
2025-09-19 06:36:50.211173 | orchestrator | ok: [testbed-manager]
2025-09-19 06:36:50.211185 | orchestrator |
2025-09-19 06:36:50.211196 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-09-19 06:36:50.211208 | orchestrator | Friday 19 September 2025 06:36:44 +0000 (0:00:03.828) 0:00:03.979 ******
2025-09-19 06:36:50.211219 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:36:50.211231 | orchestrator |
2025-09-19 06:36:50.211241 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-09-19 06:36:50.211252 | orchestrator | Friday 19 September 2025 06:36:44 +0000 (0:00:00.049) 0:00:04.028 ******
2025-09-19 06:36:50.211264 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-09-19 06:36:50.211276 | orchestrator |
2025-09-19 06:36:50.211286 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-09-19 06:36:50.211297 | orchestrator | Friday 19 September 2025 06:36:44 +0000 (0:00:00.067) 0:00:04.096 ******
2025-09-19 06:36:50.211309 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-09-19 06:36:50.211320 | orchestrator |
2025-09-19 06:36:50.211331 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-09-19 06:36:50.211342 | orchestrator | Friday 19 September 2025 06:36:44 +0000 (0:00:00.068) 0:00:04.165 ******
2025-09-19 06:36:50.211353 | orchestrator | ok: [testbed-manager]
2025-09-19 06:36:50.211364 | orchestrator |
2025-09-19 06:36:50.211375 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-09-19 06:36:50.211386 | orchestrator | Friday 19 September 2025 06:36:45 +0000 (0:00:01.111) 0:00:05.277 ******
2025-09-19 06:36:50.211396 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:36:50.211408 | orchestrator |
2025-09-19 06:36:50.211419 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-09-19 06:36:50.211431 | orchestrator | Friday 19 September 2025 06:36:45 +0000 (0:00:00.066) 0:00:05.344 ******
2025-09-19 06:36:50.211441 | orchestrator | ok: [testbed-manager]
2025-09-19 06:36:50.211452 | orchestrator |
2025-09-19 06:36:50.211463 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-09-19 06:36:50.211474 | orchestrator | Friday 19 September 2025 06:36:46 +0000 (0:00:00.505) 0:00:05.849 ******
2025-09-19 06:36:50.211485 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:36:50.211496 | orchestrator |
2025-09-19 06:36:50.211507 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-09-19 06:36:50.211543 | orchestrator | Friday 19 September 2025 06:36:46 +0000 (0:00:00.086) 0:00:05.935 ******
2025-09-19 06:36:50.211557 | orchestrator | changed: [testbed-manager]
2025-09-19 06:36:50.211570 | orchestrator |
2025-09-19 06:36:50.211582 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-09-19 06:36:50.211594 | orchestrator | Friday 19 September 2025 06:36:46 +0000 (0:00:00.523) 0:00:06.459 ******
2025-09-19 06:36:50.211607 | orchestrator | changed: [testbed-manager]
2025-09-19 06:36:50.211619 | orchestrator |
2025-09-19 06:36:50.211631 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-09-19 06:36:50.211644 | orchestrator | Friday 19 September 2025 06:36:47 +0000 (0:00:01.096) 0:00:07.555 ******
2025-09-19 06:36:50.211656 | orchestrator | ok: [testbed-manager]
2025-09-19 06:36:50.211668 | orchestrator |
2025-09-19 06:36:50.211680 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-09-19 06:36:50.211704 | orchestrator | Friday 19 September 2025 06:36:48 +0000 (0:00:00.987) 0:00:08.543 ******
2025-09-19 06:36:50.211747 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-09-19 06:36:50.211759 | orchestrator |
2025-09-19 06:36:50.211772 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-09-19 06:36:50.211785 | orchestrator | Friday 19 September 2025 06:36:48 +0000 (0:00:00.088) 0:00:08.632 ******
2025-09-19 06:36:50.211797 | orchestrator | changed: [testbed-manager]
2025-09-19 06:36:50.211810 | orchestrator |
2025-09-19 06:36:50.211822 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:36:50.211836 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-19 06:36:50.211849 | orchestrator |
2025-09-19 06:36:50.211862 | orchestrator |
2025-09-19 06:36:50.211875 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 06:36:50.211888 | orchestrator | Friday 19 September 2025 06:36:49 +0000 (0:00:01.143) 0:00:09.775 ******
2025-09-19 06:36:50.211900 | orchestrator | ===============================================================================
2025-09-19 06:36:50.211911 | orchestrator | Gathering Facts --------------------------------------------------------- 3.83s
2025-09-19 06:36:50.211921 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.14s
2025-09-19 06:36:50.211932 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.11s
2025-09-19 06:36:50.211943 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.10s
2025-09-19 06:36:50.211954 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.99s
2025-09-19 06:36:50.211965 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.52s
2025-09-19 06:36:50.211993 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.51s
2025-09-19 06:36:50.212004 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s
2025-09-19 06:36:50.212015 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s
2025-09-19 06:36:50.212026 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2025-09-19 06:36:50.212037 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s
2025-09-19 06:36:50.212048 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2025-09-19 06:36:50.212059 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s
2025-09-19 06:36:50.481528 | orchestrator | + osism apply sshconfig
2025-09-19 06:37:02.413936 | orchestrator | 2025-09-19 06:37:02 | INFO  | Task c0abcb4d-019e-4123-aa37-9286f45758bc (sshconfig) was prepared for execution.
2025-09-19 06:37:02.414125 | orchestrator | 2025-09-19 06:37:02 | INFO  | It takes a moment until task c0abcb4d-019e-4123-aa37-9286f45758bc (sshconfig) has been started and output is visible here.
2025-09-19 06:37:14.098869 | orchestrator |
2025-09-19 06:37:14.099006 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-09-19 06:37:14.099035 | orchestrator |
2025-09-19 06:37:14.099054 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-09-19 06:37:14.099074 | orchestrator | Friday 19 September 2025 06:37:06 +0000 (0:00:00.163) 0:00:00.163 ******
2025-09-19 06:37:14.099094 | orchestrator | ok: [testbed-manager]
2025-09-19 06:37:14.099115 | orchestrator |
2025-09-19 06:37:14.099130 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-09-19 06:37:14.099141 | orchestrator | Friday 19 September 2025 06:37:06 +0000 (0:00:00.574) 0:00:00.738 ******
2025-09-19 06:37:14.099152 | orchestrator | changed: [testbed-manager]
2025-09-19 06:37:14.099164 | orchestrator |
2025-09-19 06:37:14.099175 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-09-19 06:37:14.099186 | orchestrator | Friday 19 September 2025 06:37:07 +0000 (0:00:00.513) 0:00:01.251 ******
2025-09-19 06:37:14.099198 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-09-19 06:37:14.099209 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-09-19 06:37:14.099221 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-09-19 06:37:14.099232 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-09-19 06:37:14.099243 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-09-19 06:37:14.099254 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-09-19 06:37:14.099265 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-09-19 06:37:14.099276 | orchestrator |
2025-09-19 06:37:14.099287 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-09-19 06:37:14.099298 | orchestrator | Friday 19 September 2025 06:37:13 +0000 (0:00:05.757) 0:00:07.008 ******
2025-09-19 06:37:14.099331 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:37:14.099346 | orchestrator |
2025-09-19 06:37:14.099359 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-09-19 06:37:14.099371 | orchestrator | Friday 19 September 2025 06:37:13 +0000 (0:00:00.075) 0:00:07.084 ******
2025-09-19 06:37:14.099384 | orchestrator | changed: [testbed-manager]
2025-09-19 06:37:14.099396 | orchestrator |
2025-09-19 06:37:14.099409 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:37:14.099423 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 06:37:14.099436 | orchestrator |
2025-09-19 06:37:14.099449 | orchestrator |
2025-09-19 06:37:14.099463 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 06:37:14.099476 | orchestrator | Friday 19 September 2025 06:37:13 +0000 (0:00:00.605) 0:00:07.689 ******
2025-09-19 06:37:14.099489 | orchestrator | ===============================================================================
2025-09-19 06:37:14.099502 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.76s
2025-09-19 06:37:14.099515 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.61s
2025-09-19 06:37:14.099528 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.57s
2025-09-19 06:37:14.099540 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.51s
2025-09-19 06:37:14.099554 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s
2025-09-19 06:37:14.357612 | orchestrator | + osism apply known-hosts
2025-09-19 06:37:26.411024 | orchestrator | 2025-09-19 06:37:26 | INFO  | Task 9aedbddc-4fb0-4463-bba5-7aa2bfcca07a (known-hosts) was prepared for execution.
2025-09-19 06:37:26.411135 | orchestrator | 2025-09-19 06:37:26 | INFO  | It takes a moment until task 9aedbddc-4fb0-4463-bba5-7aa2bfcca07a (known-hosts) has been started and output is visible here.
2025-09-19 06:37:43.255837 | orchestrator |
2025-09-19 06:37:43.255946 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-09-19 06:37:43.255963 | orchestrator |
2025-09-19 06:37:43.255975 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-09-19 06:37:43.255988 | orchestrator | Friday 19 September 2025 06:37:30 +0000 (0:00:00.168) 0:00:00.168 ******
2025-09-19 06:37:43.255999 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-09-19 06:37:43.256011 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-09-19 06:37:43.256022 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-09-19 06:37:43.256033 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-09-19 06:37:43.256044 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-09-19 06:37:43.256055 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-09-19 06:37:43.256066 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-09-19 06:37:43.256077 | orchestrator |
2025-09-19 06:37:43.256088 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-09-19 06:37:43.256100 | orchestrator | Friday 19 September 2025 06:37:36 +0000 (0:00:06.105) 0:00:06.274 ******
2025-09-19 06:37:43.256112 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-09-19 06:37:43.256126 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-09-19 06:37:43.256137 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-09-19 06:37:43.256148 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-09-19 06:37:43.256159 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-09-19 06:37:43.256170 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-09-19 06:37:43.256181 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-09-19 06:37:43.256192 | orchestrator |
2025-09-19 06:37:43.256203 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 06:37:43.256214 | orchestrator | Friday 19 September 2025 06:37:36 +0000 (0:00:00.159) 0:00:06.433 ******
2025-09-19 06:37:43.256235 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNaL8YHLjpN13AmgVHp6tUoBReW5KZQRPuWoNCZG/B+RFv0eS3SipWklge1N8J99D3y4vqJ+AaOSTGccC/ejDnk=)
2025-09-19 06:37:43.256251 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCCYnDFiMIZ3+3Iqc4CxppXFdNYXJ+4zcltmv+Q3KkMIyhagOLnpzhsQenpvBLGgJjt47DVu7GqitTAeSAWGpifdTyN1HM5TwwQYR7U2QiQ3lu0SuRGisT3u5HnwKeXCdQhHSnKeevsLdh837WLrN6R3ZRvCZA2P52OCYPmMUaO80kiJBhxuohPIn+OxT3Z0XjytmuXQouaGeOb9JI6M5a3/bT6o2teV3aeFevOrrvSD0z022FuKAv50PaAjiqu4r9175oIH4i9v3Be5NNBjT4DtKdGn7K/EZNScODFSIUf6GZ0jSFWEUP0WZRh8ueoMw6g+J281bGhy5hk1Ae83Vari0XE33LHkV5Z1+M0tnQutlxbZyfShinbL4r8Ms9DS4EQsFP3vxLexGTp8iqAIOK0o2Xo17aF5/Xxrg0kDAVzQmiJ3BRYQ5gVb2RMg9djviO3RE2rAUbarirWu5hC3MDfIrNbdF0Yb9odNlDtJOMcgH3cLSp8JLVaaO6j6duAMtM=)
2025-09-19 06:37:43.256266 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJeKbFMSSj3oJac3H7qYwUlEkUQ2wXfiBu4Rz16ZA22M)
2025-09-19 06:37:43.256299 | orchestrator |
2025-09-19 06:37:43.256311 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 06:37:43.256322 | orchestrator | Friday 19 September 2025 06:37:37 +0000 (0:00:01.217) 0:00:07.651 ******
2025-09-19 06:37:43.256334 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNF9YORgnezlgUVFWc5jVnxo7Q+rCB4TcYjpBvVWglwzYdhcma8Yc5JT3euFCfTQ2gGUZu/5rt7cmymJgPHf4lU=)
2025-09-19 06:37:43.256347 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIObYfCTR6zEwFXf7M86+e/N7dEYzaUDJzhFgst5AEiqv)
2025-09-19 06:37:43.256387 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDcsEEff7GpFYL3EgDYMTfrDbDFPQQp1q3tj25Vq5MMOMZrgMw7cZe3DlQkLII7C6zpAwihoyjci0z3JoxcIXgG/Pd58N9SSAEs57tbcPvlH6eiimNBbLMKcKTi2AvWHREveOTyWUWZgM9awmmnbnR97wUPCFc3gbLC4ZDg/UTdcbhuAVUKX8iJ1lQiY+4uphTGJdea/BiPPILmQEdbs8nmBBMxj6KS1ev8iNPKVPx8yO7Zz5DA20pn4uEVtNEVZn3RVUhhvXiP5hFCqn7Y3pH1+S7++Ej5P3XZzvw+GkCnUGHHYIwCl1FSc9gl5I2MBMg6u4jxezLoVDk8Q/uU8MGN2Sr6IPUF/mn7fXrOqkfeqpiVrMOFUOiaA9rvFD/roLK8SgLJQY4LpPtpfAHmhF5rSGT60OXkwxWu13/cayotWxSmWi4mSn+AJ2acINXbCSDAuV9hlu74kwjt2phI+F4WNrmLmDFBqMJpoG8cRExDR4LxCF203KJAYpu+duOEXus=)
2025-09-19 06:37:43.256402 | orchestrator |
2025-09-19 06:37:43.256414 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 06:37:43.256427 | orchestrator | Friday 19 September 2025 06:37:38 +0000 (0:00:01.101) 0:00:08.752 ******
2025-09-19 06:37:43.256439 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFPgYwK4FbbZjfFs7FsqRRa5/LCnBaE8PyyjGldBbaRmEui4vYfTRV530p6GHwvXyZUA0pukdIisoN3QRKGhrQl8B8A7r/zgsAWS2DTP+Pf/307W5/txr1tp5bbrZLNGGypA3Qs12chsH7InrMwylN76qxVU2mpa3x8qt6te51OTPkXOIoT1xUGtHEXFAHREJmZBIchy2PQV+1p/IWQ6uCHRziRkKHKn9WzwgKXDmYbcpfqh5P0hNh7iIG8Sy4k9r/ch9UZ6M8VDCIrClkLx4QXj8WAl9ePpF1XAUjRKA3NaBf1lu/YZSCSgBhia1iiOj6ZYHEQAP69s/iBTtkdEP1zJCIQUIudPwOKkq/lyK0jalPOtjJ68vZYe9pY/p07MFdTeGdA47XqHzRp7K+vmHc0/5lv0gf8aiXIkW49SfCJ7iu8/i62JwqhuLWk9fAxdXmw10Yht/+XMA/ipMeGlLx6Is/3Ml6OIID5ASp6/pAhiwklGj9H00xJLODWBELF/s=)
2025-09-19 06:37:43.256451 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOEdwQ7bI/Y4jvSiLRdrdOgxTAZntVn/XR6IfMnjnGk5vI0a7OQ/DbFVL9krdpXQHsjHyAEiS1juiZgusOJrrss=)
2025-09-19 06:37:43.256463 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHXna/sD9oEG4shvTKvFiuDkl4JUto73s0066Uxcs2kS)
2025-09-19 06:37:43.256474 | orchestrator |
2025-09-19 06:37:43.256486 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 06:37:43.256497 | orchestrator | Friday 19 September 2025 06:37:39 +0000 (0:00:01.073) 0:00:09.826 ******
2025-09-19 06:37:43.256508 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFscX4lRLrx124a3OF0wQjBAk+CUETRYyTVpk/6LH4eV)
2025-09-19 06:37:43.256520 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC44V+IJuUvxPICL84yowgciwPXkz7XYJFGbRfsWp6Beva9o2YPNz35KoUdnZTDzFvIRAKrLswc5bgq0uVLYtiCdbVL6t67m9bteu1U1sc7Lpm9HxREDY6oX4u7fibAUUnbMIyPnHfeZMHiNwdWWmDt6A7y0nxJFvDSMfHYauE2QC0pWRCbvVve4EnKWM92hJj4yptglJE7I5YJDyzNKJX8t4j4zPoalyUWKTGu7lnO9mzUbSLD1tSEL8J9DXG5XzbW7U8M+9PCbU5z8msu9uFs8VEGLcxyN/jIcVusl6mQRITBP/LZjj+cFM17Cws9wxT4NEoDy+Q3IE26kEicnxJ6fWG6KMdHtvHX6wfKa3aH8JIMkQ0mplPHnf+1hEgVfmqcidTFQy+9clC03QlwP63MlPbngXL7IbawMjMq5ZIHEZddVimTnrgmNB+1GIa1LXnFxNXP5FyW6reE9s29Ks+GlruPpicGSXT5ZXfkxsabzTtkfTGHMKjtypU8BFQ9/3k=)
2025-09-19 06:37:43.256532 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDoL5H1hgoW/XKtkJspSF5dp6Bvjg7ZMaIRNgDHfAZg59J4Kcx58z1bjKP0pJic6FwRfbrg52HhA+onOvfnQ+WA=)
2025-09-19 06:37:43.256551 | orchestrator |
2025-09-19 06:37:43.256562 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 06:37:43.256573 | orchestrator | Friday 19 September 2025 06:37:41 +0000 (0:00:01.064) 0:00:10.890 ******
2025-09-19 06:37:43.256652 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMyajlxiVns+jz5+o6O6FR3hBUylCTtpg+1ipaKkJDo4Q2dgc9CZCRlYgPuILmnUsmNkNFolUmko4BE2JEFS+64=)
2025-09-19 06:37:43.256664 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChEn7LCI5cmdRzaMNYz7fViCmhxEHa5LBz0q4iqZ1/mSDPIHtYi/9UcA4Vo0zmsny+TqnQAAgU2K2rgTNZ/Bkmnb+ERLToltOpnUY7i1SznW9/P4wH27gQhkAt6CcDdaNyOHJ4braQVjxnXuqfmxmbjIwNPfE2+dFrjHiuMu8J2HNpFeQZUOQCIuNREZQHfO5WF3fzQDaw57NRo9p7cNlTu+zPwME7q4yc6Fbb2fkkNmGLV9MlcRrwd+BL6ll5G/+NCrehIlX8mycAl2I4YJ6HJLST9rBcjT15ZKoU4v2iKfuwr0dkmih0iYgV+tUspws47e4ptvpUkg37t/cBcnKmMyD2GWMBMBzyZqux4+MmO/SnxLjhHYX/xFbZFyUtD8PAapZXNe+J+HjkyRc4KhHDkYXsxZx7mRCo110m1XK/EIBBhp77pZLU7uIi8JURo815aG9mveu0WiVQFKmBRH1oHK94sPyiWcWtdKhYY7SZZyFp43vL9YujzSQD9d2L2qM=)
2025-09-19 06:37:43.256680 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMf3lVzr8Jd0OHpxz+TnJy0nv9KEhuksOKupfw8YTyDn)
2025-09-19 06:37:43.256692 | orchestrator |
2025-09-19 06:37:43.256728 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 06:37:43.256739 | orchestrator | Friday 19 September 2025 06:37:42 +0000 (0:00:01.073) 0:00:11.964 ******
2025-09-19 06:37:43.256757 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBODknWzj34KhLNkp55zOy54tLg2VEEtui4UCLKm73SO)
2025-09-19 06:37:53.353491 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0hbWSBqkan95ZxoIS8i7u8Mh5emjNqHYnaNgdLqCkFj58DevVWN0VWo+IcnfbrXMKngXKiQGR4Nxq4BgPsKhehQVp/T7lnHOOQM8tsS8jQS984EV2METVYEmz7tBFD11AIpil1aaLJvDOqCfrkHyHALaM9eKzgXJlR8J88YiVa32QKtNPL0s1wAoF77s8ctJFHpfP4Xx6jDB6t/GLwUsMQ5K0igFHqrG+IiAHrtCLuHK6WIvYNhDh30Ze9lNj1UJwVwlM8IQTkxVmqrWICQA+5rU2+aQSrc+aTQAn+EoRUhHbtz9gTvI/Vn8QM90lhtwrH01G7ygmYC21pcuSB2AzUgBH2nb09zp+nrmyuZhaz/aSeWpmnSjAzMOuMu4Sxo0bM/Z4lK2mAia9fE1Xoh2nfReY9Dvl1D1oZ7E5ue616ekYPqYKoAl7n/csRYmGtDNY2cJAQDN9ahxcCpzOkbS7AVvnAuqWNN09y74GaIFO1X57EaCenc1KXvlOgrl8ank=)
2025-09-19 06:37:53.353600 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHDX+UBAJ/O1m2q0KekzdVvLKPumrcrCoG6XzgUXtYZvE+Z/GCTqLtLx8rt4uFkwdxhUxBjac0SMC5wow8OeTdA=)
2025-09-19 06:37:53.353618 | orchestrator |
2025-09-19 06:37:53.353631 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 06:37:53.353644 | orchestrator | Friday 19 September 2025 06:37:43 +0000 (0:00:01.143) 0:00:13.107 ******
2025-09-19 06:37:53.353656 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCR4Gt+nrELwG5g+VF4q8unusqemPykHmBzKFbPAmVuoLT7twYmZD5ttndDOn/dqiUTfNpC2zTQJ3/znnhvpuvh+tWGHNUU9ctSsgl6Qox6uFcVNHBj5bnXvAagM00dspvzwgi5GtYnkWWmqKmy+edO4SE8DenRiEfy8+9ZVpKL3R8EYHC+sL8GaQETI0pPDHqxeEfi6/31I9Bc1jMR8KJ9H9sK3UA25FFsSMkZlYlQZe4oqDvlIRv9uPEkPBhTfVb1mO/5Lp0nMUx2EPFylnmmOgIMX2h5D8wt0Q2NKacG3GoCjHIC6Q3zDlWRXyayP8fMIxWHVW3US9SEPruXd5X7zWFOGrUjlOaB8k4Zz0rYfd+t2hEwEveqEFVyUmIZrx8NwfDdJ54lOJhHQrIMpOq9uEjqpjCpi2hWiDco46uG9LY4/I4qyp38BYMzylaUwayW0egON4/GiwNmz6Ey5g6IhalarvTP6iHJIN0TPtc7NB3p32sA8fThxpvvygsvYg8=)
2025-09-19 06:37:53.353668 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIKo/tOmvpVXx202tWJX9uon2FJ5NqSIdQOD/BdFGNQpgStex2YQSQGoDCwb7njwmFOOf1xTD9b+cpHI1rRSWRU=)
2025-09-19 06:37:53.353680 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEv5D8Q01WWa9mqkMAHLaTDyYmPI1ikj24zl1DEpFqVK)
2025-09-19 06:37:53.353692 | orchestrator |
2025-09-19 06:37:53.353752 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-09-19 06:37:53.353783 | orchestrator | Friday 19 September 2025 06:37:44 +0000 (0:00:01.084) 0:00:14.191 ******
2025-09-19 06:37:53.353796 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-09-19 06:37:53.353808 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-09-19
06:37:53.353819 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-19 06:37:53.353830 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-19 06:37:53.353841 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-19 06:37:53.353852 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-19 06:37:53.353863 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-19 06:37:53.353875 | orchestrator | 2025-09-19 06:37:53.353886 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-09-19 06:37:53.353898 | orchestrator | Friday 19 September 2025 06:37:49 +0000 (0:00:05.059) 0:00:19.251 ****** 2025-09-19 06:37:53.353910 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-19 06:37:53.353924 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-19 06:37:53.353935 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-19 06:37:53.353946 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-19 06:37:53.353957 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-19 06:37:53.353968 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-19 06:37:53.353990 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-19 06:37:53.354001 | orchestrator | 2025-09-19 06:37:53.354101 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 06:37:53.354118 | orchestrator | Friday 19 September 2025 06:37:49 +0000 (0:00:00.160) 0:00:19.412 ****** 2025-09-19 06:37:53.354132 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJeKbFMSSj3oJac3H7qYwUlEkUQ2wXfiBu4Rz16ZA22M) 2025-09-19 06:37:53.354147 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCCYnDFiMIZ3+3Iqc4CxppXFdNYXJ+4zcltmv+Q3KkMIyhagOLnpzhsQenpvBLGgJjt47DVu7GqitTAeSAWGpifdTyN1HM5TwwQYR7U2QiQ3lu0SuRGisT3u5HnwKeXCdQhHSnKeevsLdh837WLrN6R3ZRvCZA2P52OCYPmMUaO80kiJBhxuohPIn+OxT3Z0XjytmuXQouaGeOb9JI6M5a3/bT6o2teV3aeFevOrrvSD0z022FuKAv50PaAjiqu4r9175oIH4i9v3Be5NNBjT4DtKdGn7K/EZNScODFSIUf6GZ0jSFWEUP0WZRh8ueoMw6g+J281bGhy5hk1Ae83Vari0XE33LHkV5Z1+M0tnQutlxbZyfShinbL4r8Ms9DS4EQsFP3vxLexGTp8iqAIOK0o2Xo17aF5/Xxrg0kDAVzQmiJ3BRYQ5gVb2RMg9djviO3RE2rAUbarirWu5hC3MDfIrNbdF0Yb9odNlDtJOMcgH3cLSp8JLVaaO6j6duAMtM=) 2025-09-19 06:37:53.354161 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNaL8YHLjpN13AmgVHp6tUoBReW5KZQRPuWoNCZG/B+RFv0eS3SipWklge1N8J99D3y4vqJ+AaOSTGccC/ejDnk=) 2025-09-19 06:37:53.354173 | orchestrator | 2025-09-19 06:37:53.354186 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 06:37:53.354199 | orchestrator | Friday 19 September 2025 
06:37:50 +0000 (0:00:00.947) 0:00:20.360 ****** 2025-09-19 06:37:53.354220 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNF9YORgnezlgUVFWc5jVnxo7Q+rCB4TcYjpBvVWglwzYdhcma8Yc5JT3euFCfTQ2gGUZu/5rt7cmymJgPHf4lU=) 2025-09-19 06:37:53.354233 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDcsEEff7GpFYL3EgDYMTfrDbDFPQQp1q3tj25Vq5MMOMZrgMw7cZe3DlQkLII7C6zpAwihoyjci0z3JoxcIXgG/Pd58N9SSAEs57tbcPvlH6eiimNBbLMKcKTi2AvWHREveOTyWUWZgM9awmmnbnR97wUPCFc3gbLC4ZDg/UTdcbhuAVUKX8iJ1lQiY+4uphTGJdea/BiPPILmQEdbs8nmBBMxj6KS1ev8iNPKVPx8yO7Zz5DA20pn4uEVtNEVZn3RVUhhvXiP5hFCqn7Y3pH1+S7++Ej5P3XZzvw+GkCnUGHHYIwCl1FSc9gl5I2MBMg6u4jxezLoVDk8Q/uU8MGN2Sr6IPUF/mn7fXrOqkfeqpiVrMOFUOiaA9rvFD/roLK8SgLJQY4LpPtpfAHmhF5rSGT60OXkwxWu13/cayotWxSmWi4mSn+AJ2acINXbCSDAuV9hlu74kwjt2phI+F4WNrmLmDFBqMJpoG8cRExDR4LxCF203KJAYpu+duOEXus=) 2025-09-19 06:37:53.354247 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIObYfCTR6zEwFXf7M86+e/N7dEYzaUDJzhFgst5AEiqv) 2025-09-19 06:37:53.354259 | orchestrator | 2025-09-19 06:37:53.354272 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 06:37:53.354284 | orchestrator | Friday 19 September 2025 06:37:51 +0000 (0:00:00.947) 0:00:21.308 ****** 2025-09-19 06:37:53.354298 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDFPgYwK4FbbZjfFs7FsqRRa5/LCnBaE8PyyjGldBbaRmEui4vYfTRV530p6GHwvXyZUA0pukdIisoN3QRKGhrQl8B8A7r/zgsAWS2DTP+Pf/307W5/txr1tp5bbrZLNGGypA3Qs12chsH7InrMwylN76qxVU2mpa3x8qt6te51OTPkXOIoT1xUGtHEXFAHREJmZBIchy2PQV+1p/IWQ6uCHRziRkKHKn9WzwgKXDmYbcpfqh5P0hNh7iIG8Sy4k9r/ch9UZ6M8VDCIrClkLx4QXj8WAl9ePpF1XAUjRKA3NaBf1lu/YZSCSgBhia1iiOj6ZYHEQAP69s/iBTtkdEP1zJCIQUIudPwOKkq/lyK0jalPOtjJ68vZYe9pY/p07MFdTeGdA47XqHzRp7K+vmHc0/5lv0gf8aiXIkW49SfCJ7iu8/i62JwqhuLWk9fAxdXmw10Yht/+XMA/ipMeGlLx6Is/3Ml6OIID5ASp6/pAhiwklGj9H00xJLODWBELF/s=) 2025-09-19 06:37:53.354311 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOEdwQ7bI/Y4jvSiLRdrdOgxTAZntVn/XR6IfMnjnGk5vI0a7OQ/DbFVL9krdpXQHsjHyAEiS1juiZgusOJrrss=) 2025-09-19 06:37:53.354323 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHXna/sD9oEG4shvTKvFiuDkl4JUto73s0066Uxcs2kS) 2025-09-19 06:37:53.354336 | orchestrator | 2025-09-19 06:37:53.354348 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 06:37:53.354362 | orchestrator | Friday 19 September 2025 06:37:52 +0000 (0:00:00.934) 0:00:22.242 ****** 2025-09-19 06:37:53.354374 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFscX4lRLrx124a3OF0wQjBAk+CUETRYyTVpk/6LH4eV) 2025-09-19 06:37:53.354405 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC44V+IJuUvxPICL84yowgciwPXkz7XYJFGbRfsWp6Beva9o2YPNz35KoUdnZTDzFvIRAKrLswc5bgq0uVLYtiCdbVL6t67m9bteu1U1sc7Lpm9HxREDY6oX4u7fibAUUnbMIyPnHfeZMHiNwdWWmDt6A7y0nxJFvDSMfHYauE2QC0pWRCbvVve4EnKWM92hJj4yptglJE7I5YJDyzNKJX8t4j4zPoalyUWKTGu7lnO9mzUbSLD1tSEL8J9DXG5XzbW7U8M+9PCbU5z8msu9uFs8VEGLcxyN/jIcVusl6mQRITBP/LZjj+cFM17Cws9wxT4NEoDy+Q3IE26kEicnxJ6fWG6KMdHtvHX6wfKa3aH8JIMkQ0mplPHnf+1hEgVfmqcidTFQy+9clC03QlwP63MlPbngXL7IbawMjMq5ZIHEZddVimTnrgmNB+1GIa1LXnFxNXP5FyW6reE9s29Ks+GlruPpicGSXT5ZXfkxsabzTtkfTGHMKjtypU8BFQ9/3k=) 2025-09-19 06:37:57.353008 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDoL5H1hgoW/XKtkJspSF5dp6Bvjg7ZMaIRNgDHfAZg59J4Kcx58z1bjKP0pJic6FwRfbrg52HhA+onOvfnQ+WA=) 2025-09-19 06:37:57.353115 | orchestrator | 2025-09-19 06:37:57.353132 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 06:37:57.353146 | orchestrator | Friday 19 September 2025 06:37:53 +0000 (0:00:00.963) 0:00:23.206 ****** 2025-09-19 06:37:57.353160 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChEn7LCI5cmdRzaMNYz7fViCmhxEHa5LBz0q4iqZ1/mSDPIHtYi/9UcA4Vo0zmsny+TqnQAAgU2K2rgTNZ/Bkmnb+ERLToltOpnUY7i1SznW9/P4wH27gQhkAt6CcDdaNyOHJ4braQVjxnXuqfmxmbjIwNPfE2+dFrjHiuMu8J2HNpFeQZUOQCIuNREZQHfO5WF3fzQDaw57NRo9p7cNlTu+zPwME7q4yc6Fbb2fkkNmGLV9MlcRrwd+BL6ll5G/+NCrehIlX8mycAl2I4YJ6HJLST9rBcjT15ZKoU4v2iKfuwr0dkmih0iYgV+tUspws47e4ptvpUkg37t/cBcnKmMyD2GWMBMBzyZqux4+MmO/SnxLjhHYX/xFbZFyUtD8PAapZXNe+J+HjkyRc4KhHDkYXsxZx7mRCo110m1XK/EIBBhp77pZLU7uIi8JURo815aG9mveu0WiVQFKmBRH1oHK94sPyiWcWtdKhYY7SZZyFp43vL9YujzSQD9d2L2qM=) 2025-09-19 06:37:57.353198 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMyajlxiVns+jz5+o6O6FR3hBUylCTtpg+1ipaKkJDo4Q2dgc9CZCRlYgPuILmnUsmNkNFolUmko4BE2JEFS+64=) 
2025-09-19 06:37:57.353227 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMf3lVzr8Jd0OHpxz+TnJy0nv9KEhuksOKupfw8YTyDn) 2025-09-19 06:37:57.353240 | orchestrator | 2025-09-19 06:37:57.353251 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 06:37:57.353263 | orchestrator | Friday 19 September 2025 06:37:54 +0000 (0:00:01.065) 0:00:24.271 ****** 2025-09-19 06:37:57.353274 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0hbWSBqkan95ZxoIS8i7u8Mh5emjNqHYnaNgdLqCkFj58DevVWN0VWo+IcnfbrXMKngXKiQGR4Nxq4BgPsKhehQVp/T7lnHOOQM8tsS8jQS984EV2METVYEmz7tBFD11AIpil1aaLJvDOqCfrkHyHALaM9eKzgXJlR8J88YiVa32QKtNPL0s1wAoF77s8ctJFHpfP4Xx6jDB6t/GLwUsMQ5K0igFHqrG+IiAHrtCLuHK6WIvYNhDh30Ze9lNj1UJwVwlM8IQTkxVmqrWICQA+5rU2+aQSrc+aTQAn+EoRUhHbtz9gTvI/Vn8QM90lhtwrH01G7ygmYC21pcuSB2AzUgBH2nb09zp+nrmyuZhaz/aSeWpmnSjAzMOuMu4Sxo0bM/Z4lK2mAia9fE1Xoh2nfReY9Dvl1D1oZ7E5ue616ekYPqYKoAl7n/csRYmGtDNY2cJAQDN9ahxcCpzOkbS7AVvnAuqWNN09y74GaIFO1X57EaCenc1KXvlOgrl8ank=) 2025-09-19 06:37:57.353286 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHDX+UBAJ/O1m2q0KekzdVvLKPumrcrCoG6XzgUXtYZvE+Z/GCTqLtLx8rt4uFkwdxhUxBjac0SMC5wow8OeTdA=) 2025-09-19 06:37:57.353297 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBODknWzj34KhLNkp55zOy54tLg2VEEtui4UCLKm73SO) 2025-09-19 06:37:57.353308 | orchestrator | 2025-09-19 06:37:57.353319 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 06:37:57.353330 | orchestrator | Friday 19 September 2025 06:37:55 +0000 (0:00:01.063) 0:00:25.335 ****** 2025-09-19 06:37:57.353342 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIKo/tOmvpVXx202tWJX9uon2FJ5NqSIdQOD/BdFGNQpgStex2YQSQGoDCwb7njwmFOOf1xTD9b+cpHI1rRSWRU=) 2025-09-19 06:37:57.353358 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCR4Gt+nrELwG5g+VF4q8unusqemPykHmBzKFbPAmVuoLT7twYmZD5ttndDOn/dqiUTfNpC2zTQJ3/znnhvpuvh+tWGHNUU9ctSsgl6Qox6uFcVNHBj5bnXvAagM00dspvzwgi5GtYnkWWmqKmy+edO4SE8DenRiEfy8+9ZVpKL3R8EYHC+sL8GaQETI0pPDHqxeEfi6/31I9Bc1jMR8KJ9H9sK3UA25FFsSMkZlYlQZe4oqDvlIRv9uPEkPBhTfVb1mO/5Lp0nMUx2EPFylnmmOgIMX2h5D8wt0Q2NKacG3GoCjHIC6Q3zDlWRXyayP8fMIxWHVW3US9SEPruXd5X7zWFOGrUjlOaB8k4Zz0rYfd+t2hEwEveqEFVyUmIZrx8NwfDdJ54lOJhHQrIMpOq9uEjqpjCpi2hWiDco46uG9LY4/I4qyp38BYMzylaUwayW0egON4/GiwNmz6Ey5g6IhalarvTP6iHJIN0TPtc7NB3p32sA8fThxpvvygsvYg8=) 2025-09-19 06:37:57.353371 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEv5D8Q01WWa9mqkMAHLaTDyYmPI1ikj24zl1DEpFqVK) 2025-09-19 06:37:57.353383 | orchestrator | 2025-09-19 06:37:57.353394 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-09-19 06:37:57.353405 | orchestrator | Friday 19 September 2025 06:37:56 +0000 (0:00:01.018) 0:00:26.354 ****** 2025-09-19 06:37:57.353416 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-19 06:37:57.353428 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-19 06:37:57.353439 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-19 06:37:57.353457 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-19 06:37:57.353484 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-19 06:37:57.353496 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-19 06:37:57.353507 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-19 06:37:57.353518 | orchestrator | skipping: 
[testbed-manager] 2025-09-19 06:37:57.353529 | orchestrator | 2025-09-19 06:37:57.353540 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-09-19 06:37:57.353552 | orchestrator | Friday 19 September 2025 06:37:56 +0000 (0:00:00.160) 0:00:26.515 ****** 2025-09-19 06:37:57.353564 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:37:57.353577 | orchestrator | 2025-09-19 06:37:57.353589 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-09-19 06:37:57.353601 | orchestrator | Friday 19 September 2025 06:37:56 +0000 (0:00:00.058) 0:00:26.573 ****** 2025-09-19 06:37:57.353614 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:37:57.353626 | orchestrator | 2025-09-19 06:37:57.353637 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-09-19 06:37:57.353648 | orchestrator | Friday 19 September 2025 06:37:56 +0000 (0:00:00.054) 0:00:26.628 ****** 2025-09-19 06:37:57.353659 | orchestrator | changed: [testbed-manager] 2025-09-19 06:37:57.353669 | orchestrator | 2025-09-19 06:37:57.353680 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 06:37:57.353691 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 06:37:57.353737 | orchestrator | 2025-09-19 06:37:57.353748 | orchestrator | 2025-09-19 06:37:57.353759 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 06:37:57.353770 | orchestrator | Friday 19 September 2025 06:37:57 +0000 (0:00:00.419) 0:00:27.048 ****** 2025-09-19 06:37:57.353781 | orchestrator | =============================================================================== 2025-09-19 06:37:57.353792 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.11s 2025-09-19 
06:37:57.353803 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.06s 2025-09-19 06:37:57.353814 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2025-09-19 06:37:57.353825 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-09-19 06:37:57.353836 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-09-19 06:37:57.353847 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-09-19 06:37:57.353858 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-19 06:37:57.353868 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-19 06:37:57.353879 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-19 06:37:57.353890 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-09-19 06:37:57.353900 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-09-19 06:37:57.353911 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-09-19 06:37:57.353922 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s 2025-09-19 06:37:57.353933 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2025-09-19 06:37:57.353943 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2025-09-19 06:37:57.353954 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s 2025-09-19 06:37:57.353965 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.42s 2025-09-19 
06:37:57.353975 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2025-09-19 06:37:57.353994 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2025-09-19 06:37:57.354006 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2025-09-19 06:37:57.533810 | orchestrator | + osism apply squid 2025-09-19 06:38:09.318278 | orchestrator | 2025-09-19 06:38:09 | INFO  | Task f768e345-956c-4008-90c4-f50e84d12404 (squid) was prepared for execution. 2025-09-19 06:38:09.318390 | orchestrator | 2025-09-19 06:38:09 | INFO  | It takes a moment until task f768e345-956c-4008-90c4-f50e84d12404 (squid) has been started and output is visible here. 2025-09-19 06:40:04.118107 | orchestrator | 2025-09-19 06:40:04.118220 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-09-19 06:40:04.118236 | orchestrator | 2025-09-19 06:40:04.118247 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-09-19 06:40:04.118258 | orchestrator | Friday 19 September 2025 06:38:12 +0000 (0:00:00.149) 0:00:00.149 ****** 2025-09-19 06:40:04.118268 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-09-19 06:40:04.118280 | orchestrator | 2025-09-19 06:40:04.118308 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-09-19 06:40:04.118319 | orchestrator | Friday 19 September 2025 06:38:12 +0000 (0:00:00.082) 0:00:00.232 ****** 2025-09-19 06:40:04.118329 | orchestrator | ok: [testbed-manager] 2025-09-19 06:40:04.118355 | orchestrator | 2025-09-19 06:40:04.118375 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-09-19 
06:40:04.118385 | orchestrator | Friday 19 September 2025 06:38:15 +0000 (0:00:02.206) 0:00:02.439 ****** 2025-09-19 06:40:04.118396 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-09-19 06:40:04.118406 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-09-19 06:40:04.118416 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-09-19 06:40:04.118427 | orchestrator | 2025-09-19 06:40:04.118437 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-09-19 06:40:04.118446 | orchestrator | Friday 19 September 2025 06:38:16 +0000 (0:00:01.007) 0:00:03.446 ****** 2025-09-19 06:40:04.118456 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-09-19 06:40:04.118466 | orchestrator | 2025-09-19 06:40:04.118476 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-09-19 06:40:04.118486 | orchestrator | Friday 19 September 2025 06:38:17 +0000 (0:00:00.950) 0:00:04.396 ****** 2025-09-19 06:40:04.118496 | orchestrator | ok: [testbed-manager] 2025-09-19 06:40:04.118506 | orchestrator | 2025-09-19 06:40:04.118516 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-09-19 06:40:04.118526 | orchestrator | Friday 19 September 2025 06:38:17 +0000 (0:00:00.347) 0:00:04.744 ****** 2025-09-19 06:40:04.118535 | orchestrator | changed: [testbed-manager] 2025-09-19 06:40:04.118545 | orchestrator | 2025-09-19 06:40:04.118555 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-09-19 06:40:04.118565 | orchestrator | Friday 19 September 2025 06:38:18 +0000 (0:00:00.833) 0:00:05.578 ****** 2025-09-19 06:40:04.118575 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-09-19 06:40:04.118586 | orchestrator | ok: [testbed-manager] 2025-09-19 06:40:04.118595 | orchestrator | 2025-09-19 06:40:04.118607 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-09-19 06:40:04.118619 | orchestrator | Friday 19 September 2025 06:38:51 +0000 (0:00:32.727) 0:00:38.305 ****** 2025-09-19 06:40:04.118631 | orchestrator | changed: [testbed-manager] 2025-09-19 06:40:04.118670 | orchestrator | 2025-09-19 06:40:04.118682 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-09-19 06:40:04.118693 | orchestrator | Friday 19 September 2025 06:39:03 +0000 (0:00:12.123) 0:00:50.429 ****** 2025-09-19 06:40:04.118705 | orchestrator | Pausing for 60 seconds 2025-09-19 06:40:04.118742 | orchestrator | changed: [testbed-manager] 2025-09-19 06:40:04.118760 | orchestrator | 2025-09-19 06:40:04.118782 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-09-19 06:40:04.118807 | orchestrator | Friday 19 September 2025 06:40:03 +0000 (0:01:00.066) 0:01:50.495 ****** 2025-09-19 06:40:04.118825 | orchestrator | ok: [testbed-manager] 2025-09-19 06:40:04.118842 | orchestrator | 2025-09-19 06:40:04.118859 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-09-19 06:40:04.118875 | orchestrator | Friday 19 September 2025 06:40:03 +0000 (0:00:00.090) 0:01:50.586 ****** 2025-09-19 06:40:04.118890 | orchestrator | changed: [testbed-manager] 2025-09-19 06:40:04.118907 | orchestrator | 2025-09-19 06:40:04.118924 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 06:40:04.118941 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 06:40:04.118959 | orchestrator | 2025-09-19 06:40:04.118976 | orchestrator | 2025-09-19 06:40:04.118994 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-09-19 06:40:04.119012 | orchestrator | Friday 19 September 2025 06:40:03 +0000 (0:00:00.598) 0:01:51.185 ****** 2025-09-19 06:40:04.119022 | orchestrator | =============================================================================== 2025-09-19 06:40:04.119032 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-09-19 06:40:04.119042 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.73s 2025-09-19 06:40:04.119051 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.12s 2025-09-19 06:40:04.119061 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.21s 2025-09-19 06:40:04.119071 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.01s 2025-09-19 06:40:04.119080 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.95s 2025-09-19 06:40:04.119090 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.83s 2025-09-19 06:40:04.119100 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s 2025-09-19 06:40:04.119109 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2025-09-19 06:40:04.119119 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.09s 2025-09-19 06:40:04.119128 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2025-09-19 06:40:04.289142 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]] 2025-09-19 06:40:04.289237 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-09-19 06:40:04.293759 | orchestrator | ++ semver 9.2.0 9.0.0 
2025-09-19 06:40:04.351870 | orchestrator | + [[ 1 -lt 0 ]]
2025-09-19 06:40:04.352071 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-09-19 06:40:16.117719 | orchestrator | 2025-09-19 06:40:16 | INFO  | Task 16430987-47e8-4902-9f2b-3bdbb999b02e (operator) was prepared for execution.
2025-09-19 06:40:16.117829 | orchestrator | 2025-09-19 06:40:16 | INFO  | It takes a moment until task 16430987-47e8-4902-9f2b-3bdbb999b02e (operator) has been started and output is visible here.
2025-09-19 06:40:31.761533 | orchestrator |
2025-09-19 06:40:31.761709 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-09-19 06:40:31.761727 | orchestrator |
2025-09-19 06:40:31.761740 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-19 06:40:31.761753 | orchestrator | Friday 19 September 2025 06:40:19 +0000 (0:00:00.132) 0:00:00.132 ******
2025-09-19 06:40:31.761764 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:40:31.761777 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:40:31.761788 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:40:31.761799 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:40:31.761810 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:40:31.761843 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:40:31.761854 | orchestrator |
2025-09-19 06:40:31.761865 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-09-19 06:40:31.761876 | orchestrator | Friday 19 September 2025 06:40:23 +0000 (0:00:04.247) 0:00:04.379 ******
2025-09-19 06:40:31.761887 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:40:31.761898 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:40:31.761909 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:40:31.761920 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:40:31.761930 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:40:31.761941 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:40:31.761951 | orchestrator |
2025-09-19 06:40:31.761962 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-09-19 06:40:31.761973 | orchestrator |
2025-09-19 06:40:31.761984 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-09-19 06:40:31.761995 | orchestrator | Friday 19 September 2025 06:40:24 +0000 (0:00:00.709) 0:00:05.089 ******
2025-09-19 06:40:31.762006 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:40:31.762070 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:40:31.762084 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:40:31.762097 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:40:31.762109 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:40:31.762121 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:40:31.762133 | orchestrator |
2025-09-19 06:40:31.762145 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-09-19 06:40:31.762158 | orchestrator | Friday 19 September 2025 06:40:24 +0000 (0:00:00.142) 0:00:05.231 ******
2025-09-19 06:40:31.762171 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:40:31.762183 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:40:31.762196 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:40:31.762209 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:40:31.762221 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:40:31.762233 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:40:31.762245 | orchestrator |
2025-09-19 06:40:31.762258 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-09-19 06:40:31.762269 | orchestrator | Friday 19 September 2025 06:40:24 +0000 (0:00:00.149) 0:00:05.380 ******
2025-09-19 06:40:31.762282 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:40:31.762296 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:40:31.762308 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:40:31.762320 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:40:31.762332 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:40:31.762344 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:40:31.762356 | orchestrator |
2025-09-19 06:40:31.762368 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-09-19 06:40:31.762380 | orchestrator | Friday 19 September 2025 06:40:25 +0000 (0:00:00.559) 0:00:05.939 ******
2025-09-19 06:40:31.762393 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:40:31.762404 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:40:31.762416 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:40:31.762429 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:40:31.762442 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:40:31.762453 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:40:31.762464 | orchestrator |
2025-09-19 06:40:31.762475 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-09-19 06:40:31.762485 | orchestrator | Friday 19 September 2025 06:40:26 +0000 (0:00:00.724) 0:00:06.664 ******
2025-09-19 06:40:31.762496 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-09-19 06:40:31.762508 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-09-19 06:40:31.762519 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-09-19 06:40:31.762530 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-09-19 06:40:31.762540 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-09-19 06:40:31.762551 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-09-19 06:40:31.762570 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-09-19 06:40:31.762581 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-09-19 06:40:31.762592 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-09-19 06:40:31.762603 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-09-19 06:40:31.762614 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-09-19 06:40:31.762660 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-09-19 06:40:31.762674 | orchestrator |
2025-09-19 06:40:31.762689 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-09-19 06:40:31.762701 | orchestrator | Friday 19 September 2025 06:40:27 +0000 (0:00:01.105) 0:00:07.770 ******
2025-09-19 06:40:31.762712 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:40:31.762723 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:40:31.762734 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:40:31.762745 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:40:31.762756 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:40:31.762767 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:40:31.762778 | orchestrator |
2025-09-19 06:40:31.762789 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-09-19 06:40:31.762800 | orchestrator | Friday 19 September 2025 06:40:28 +0000 (0:00:01.167) 0:00:08.937 ******
2025-09-19 06:40:31.762816 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-09-19 06:40:31.762827 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-09-19 06:40:31.762838 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-09-19 06:40:31.762850 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-09-19 06:40:31.762879 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-09-19 06:40:31.762892 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-09-19 06:40:31.762903 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-09-19 06:40:31.762914 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-09-19 06:40:31.762925 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-09-19 06:40:31.762936 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-09-19 06:40:31.762946 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-09-19 06:40:31.762957 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-09-19 06:40:31.762968 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-09-19 06:40:31.762979 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-09-19 06:40:31.762989 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-09-19 06:40:31.763000 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-09-19 06:40:31.763011 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-09-19 06:40:31.763021 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-09-19 06:40:31.763032 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-09-19 06:40:31.763043 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-09-19 06:40:31.763054 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-09-19 06:40:31.763065 | orchestrator |
2025-09-19 06:40:31.763076 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-09-19 06:40:31.763088 | orchestrator | Friday 19 September 2025 06:40:29 +0000 (0:00:01.248) 0:00:10.186 ******
2025-09-19 06:40:31.763099 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:40:31.763109 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:40:31.763120 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:40:31.763131 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:40:31.763142 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:40:31.763160 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:40:31.763171 | orchestrator |
2025-09-19 06:40:31.763182 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-09-19 06:40:31.763193 | orchestrator | Friday 19 September 2025 06:40:29 +0000 (0:00:00.158) 0:00:10.344 ******
2025-09-19 06:40:31.763203 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:40:31.763214 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:40:31.763225 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:40:31.763236 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:40:31.763246 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:40:31.763257 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:40:31.763268 | orchestrator |
2025-09-19 06:40:31.763279 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-09-19 06:40:31.763290 | orchestrator | Friday 19 September 2025 06:40:30 +0000 (0:00:00.558) 0:00:10.903 ******
2025-09-19 06:40:31.763301 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:40:31.763311 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:40:31.763322 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:40:31.763333 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:40:31.763344 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:40:31.763354 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:40:31.763365 | orchestrator |
2025-09-19 06:40:31.763376 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-09-19 06:40:31.763387 | orchestrator | Friday 19 September 2025 06:40:30 +0000 (0:00:00.175) 0:00:11.078 ******
2025-09-19 06:40:31.763398 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-19 06:40:31.763409 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:40:31.763419 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-19 06:40:31.763430 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-09-19 06:40:31.763441 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-19 06:40:31.763452 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-19 06:40:31.763463 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:40:31.763474 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:40:31.763484 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:40:31.763495 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:40:31.763506 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-09-19 06:40:31.763517 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:40:31.763527 | orchestrator |
2025-09-19 06:40:31.763538 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-09-19 06:40:31.763549 | orchestrator | Friday 19 September 2025 06:40:31 +0000 (0:00:00.668) 0:00:11.747 ******
2025-09-19 06:40:31.763560 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:40:31.763571 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:40:31.763581 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:40:31.763592 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:40:31.763603 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:40:31.763613 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:40:31.763624 | orchestrator |
2025-09-19 06:40:31.763653 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-09-19 06:40:31.763664 | orchestrator | Friday 19 September 2025 06:40:31 +0000 (0:00:00.164) 0:00:11.911 ******
2025-09-19 06:40:31.763675 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:40:31.763686 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:40:31.763697 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:40:31.763707 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:40:31.763718 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:40:31.763729 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:40:31.763739 | orchestrator |
2025-09-19 06:40:31.763750 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-09-19 06:40:31.763761 | orchestrator | Friday 19 September 2025 06:40:31 +0000 (0:00:00.165) 0:00:12.077 ******
2025-09-19 06:40:31.763772 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:40:31.763789 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:40:31.763800 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:40:31.763811 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:40:31.763828 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:40:32.860679 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:40:32.860776 | orchestrator |
2025-09-19 06:40:32.860789 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-09-19 06:40:32.860801 | orchestrator | Friday 19 September 2025 06:40:31 +0000 (0:00:00.161) 0:00:12.238 ******
2025-09-19 06:40:32.860811 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:40:32.860820 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:40:32.860830 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:40:32.860840 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:40:32.860850 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:40:32.860859 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:40:32.860869 | orchestrator |
2025-09-19 06:40:32.860878 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-09-19 06:40:32.860888 | orchestrator | Friday 19 September 2025 06:40:32 +0000 (0:00:00.642) 0:00:12.880 ******
2025-09-19 06:40:32.860898 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:40:32.860907 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:40:32.860917 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:40:32.860926 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:40:32.860936 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:40:32.860945 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:40:32.860955 | orchestrator |
2025-09-19 06:40:32.860964 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:40:32.860975 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 06:40:32.860986 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 06:40:32.860996 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 06:40:32.861006 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 06:40:32.861015 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 06:40:32.861025 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 06:40:32.861034 | orchestrator |
2025-09-19 06:40:32.861044 | orchestrator |
2025-09-19 06:40:32.861054 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 06:40:32.861063 | orchestrator | Friday 19 September 2025 06:40:32 +0000 (0:00:00.231) 0:00:13.112 ******
2025-09-19 06:40:32.861073 | orchestrator | ===============================================================================
2025-09-19 06:40:32.861083 | orchestrator | Gathering Facts --------------------------------------------------------- 4.25s
2025-09-19 06:40:32.861093 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.25s
2025-09-19 06:40:32.861103 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.17s
2025-09-19 06:40:32.861112 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.11s
2025-09-19 06:40:32.861122 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.72s
2025-09-19 06:40:32.861131 | orchestrator | Do not require tty for all users ---------------------------------------- 0.71s
2025-09-19 06:40:32.861141 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.67s
2025-09-19 06:40:32.861173 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.64s
2025-09-19 06:40:32.861183 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.56s
2025-09-19 06:40:32.861193 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.56s
2025-09-19 06:40:32.861202 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s
2025-09-19 06:40:32.861227 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s
2025-09-19 06:40:32.861238 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s
2025-09-19 06:40:32.861247 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s
2025-09-19 06:40:32.861257 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s
2025-09-19 06:40:32.861267 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s
2025-09-19 06:40:32.861276 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s
2025-09-19 06:40:32.861286 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.14s
2025-09-19 06:40:33.186716 | orchestrator | + osism apply --environment custom facts
2025-09-19 06:40:35.003196 | orchestrator | 2025-09-19 06:40:35 | INFO  | Trying to run play facts in environment custom
2025-09-19 06:40:45.193359 | orchestrator | 2025-09-19 06:40:45 | INFO  | Task 2675f29d-3691-4547-b4e7-7c54ddf9579d (facts) was prepared for execution.
2025-09-19 06:40:45.193469 | orchestrator | 2025-09-19 06:40:45 | INFO  | It takes a moment until task 2675f29d-3691-4547-b4e7-7c54ddf9579d (facts) has been started and output is visible here.
2025-09-19 06:41:27.924770 | orchestrator |
2025-09-19 06:41:27.924889 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-09-19 06:41:27.924905 | orchestrator |
2025-09-19 06:41:27.924917 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-19 06:41:27.924929 | orchestrator | Friday 19 September 2025 06:40:49 +0000 (0:00:00.088) 0:00:00.088 ******
2025-09-19 06:41:27.924941 | orchestrator | ok: [testbed-manager]
2025-09-19 06:41:27.924953 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:41:27.924964 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:41:27.924975 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:41:27.924987 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:41:27.924998 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:41:27.925009 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:41:27.925020 | orchestrator |
2025-09-19 06:41:27.925031 | orchestrator | TASK [Copy fact file] **********************************************************
2025-09-19 06:41:27.925042 | orchestrator | Friday 19 September 2025 06:40:50 +0000 (0:00:01.438) 0:00:01.527 ******
2025-09-19 06:41:27.925053 | orchestrator | ok: [testbed-manager]
2025-09-19 06:41:27.925064 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:41:27.925075 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:41:27.925086 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:41:27.925097 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:41:27.925107 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:41:27.925118 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:41:27.925129 | orchestrator |
2025-09-19 06:41:27.925140 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-09-19 06:41:27.925151 | orchestrator |
2025-09-19 06:41:27.925162 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-09-19 06:41:27.925173 | orchestrator | Friday 19 September 2025 06:40:51 +0000 (0:00:01.209) 0:00:02.737 ******
2025-09-19 06:41:27.925184 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:41:27.925195 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:41:27.925206 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:41:27.925217 | orchestrator |
2025-09-19 06:41:27.925228 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-09-19 06:41:27.925240 | orchestrator | Friday 19 September 2025 06:40:51 +0000 (0:00:00.097) 0:00:02.834 ******
2025-09-19 06:41:27.925273 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:41:27.925287 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:41:27.925300 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:41:27.925312 | orchestrator |
2025-09-19 06:41:27.925325 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-09-19 06:41:27.925337 | orchestrator | Friday 19 September 2025 06:40:52 +0000 (0:00:00.201) 0:00:03.045 ******
2025-09-19 06:41:27.925350 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:41:27.925362 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:41:27.925375 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:41:27.925387 | orchestrator |
2025-09-19 06:41:27.925400 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-09-19 06:41:27.925413 | orchestrator | Friday 19 September 2025 06:40:52 +0000 (0:00:00.168) 0:00:03.247 ******
2025-09-19 06:41:27.925426 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 06:41:27.925440 | orchestrator |
2025-09-19 06:41:27.925453 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-09-19 06:41:27.925466 | orchestrator | Friday 19 September 2025 06:40:52 +0000 (0:00:00.168) 0:00:03.416 ******
2025-09-19 06:41:27.925478 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:41:27.925491 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:41:27.925503 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:41:27.925516 | orchestrator |
2025-09-19 06:41:27.925528 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-09-19 06:41:27.925540 | orchestrator | Friday 19 September 2025 06:40:52 +0000 (0:00:00.471) 0:00:03.887 ******
2025-09-19 06:41:27.925553 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:41:27.925566 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:41:27.925578 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:41:27.925591 | orchestrator |
2025-09-19 06:41:27.925635 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-09-19 06:41:27.925647 | orchestrator | Friday 19 September 2025 06:40:53 +0000 (0:00:00.117) 0:00:04.004 ******
2025-09-19 06:41:27.925658 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:41:27.925669 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:41:27.925680 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:41:27.925690 | orchestrator |
2025-09-19 06:41:27.925701 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-09-19 06:41:27.925712 | orchestrator | Friday 19 September 2025 06:40:54 +0000 (0:00:01.061) 0:00:05.066 ******
2025-09-19 06:41:27.925724 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:41:27.925734 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:41:27.925745 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:41:27.925756 | orchestrator |
2025-09-19 06:41:27.925766 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-09-19 06:41:27.925777 | orchestrator | Friday 19 September 2025 06:40:54 +0000 (0:00:00.456) 0:00:05.522 ******
2025-09-19 06:41:27.925788 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:41:27.925799 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:41:27.925810 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:41:27.925821 | orchestrator |
2025-09-19 06:41:27.925831 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-09-19 06:41:27.925842 | orchestrator | Friday 19 September 2025 06:40:55 +0000 (0:00:01.068) 0:00:06.591 ******
2025-09-19 06:41:27.925853 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:41:27.925864 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:41:27.925874 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:41:27.925885 | orchestrator |
2025-09-19 06:41:27.925911 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-09-19 06:41:27.925922 | orchestrator | Friday 19 September 2025 06:41:12 +0000 (0:00:16.509) 0:00:23.100 ******
2025-09-19 06:41:27.925933 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:41:27.925951 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:41:27.925962 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:41:27.925973 | orchestrator |
2025-09-19 06:41:27.925984 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-09-19 06:41:27.926012 | orchestrator | Friday 19 September 2025 06:41:12 +0000 (0:00:00.103) 0:00:23.204 ******
2025-09-19 06:41:27.926093 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:41:27.926105 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:41:27.926116 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:41:27.926127 | orchestrator |
2025-09-19 06:41:27.926138 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-19 06:41:27.926149 | orchestrator | Friday 19 September 2025 06:41:18 +0000 (0:00:06.556) 0:00:29.761 ******
2025-09-19 06:41:27.926160 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:41:27.926171 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:41:27.926182 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:41:27.926193 | orchestrator |
2025-09-19 06:41:27.926204 | orchestrator | TASK [Copy fact files] *********************************************************
2025-09-19 06:41:27.926215 | orchestrator | Friday 19 September 2025 06:41:19 +0000 (0:00:00.473) 0:00:30.234 ******
2025-09-19 06:41:27.926226 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-09-19 06:41:27.926237 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-09-19 06:41:27.926248 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-09-19 06:41:27.926259 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-09-19 06:41:27.926270 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-09-19 06:41:27.926281 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-09-19 06:41:27.926292 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-09-19 06:41:27.926303 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-09-19 06:41:27.926314 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-09-19 06:41:27.926325 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-09-19 06:41:27.926336 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-09-19 06:41:27.926347 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-09-19 06:41:27.926358 | orchestrator |
2025-09-19 06:41:27.926369 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-19 06:41:27.926380 | orchestrator | Friday 19 September 2025 06:41:22 +0000 (0:00:03.482) 0:00:33.717 ******
2025-09-19 06:41:27.926391 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:41:27.926402 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:41:27.926413 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:41:27.926424 | orchestrator |
2025-09-19 06:41:27.926435 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-19 06:41:27.926446 | orchestrator |
2025-09-19 06:41:27.926457 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-19 06:41:27.926468 | orchestrator | Friday 19 September 2025 06:41:24 +0000 (0:00:01.335) 0:00:35.052 ******
2025-09-19 06:41:27.926479 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:41:27.926490 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:41:27.926501 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:41:27.926512 | orchestrator | ok: [testbed-manager]
2025-09-19 06:41:27.926523 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:41:27.926534 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:41:27.926545 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:41:27.926556 | orchestrator |
2025-09-19 06:41:27.926567 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:41:27.926579 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 06:41:27.926590 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 06:41:27.926630 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 06:41:27.926642 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 06:41:27.926653 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:41:27.926664 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:41:27.926675 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:41:27.926686 | orchestrator |
2025-09-19 06:41:27.926696 | orchestrator |
2025-09-19 06:41:27.926707 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 06:41:27.926718 | orchestrator | Friday 19 September 2025 06:41:27 +0000 (0:00:03.796) 0:00:38.849 ******
2025-09-19 06:41:27.926729 | orchestrator | ===============================================================================
2025-09-19 06:41:27.926740 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.51s
2025-09-19 06:41:27.926750 | orchestrator | Install required packages (Debian) -------------------------------------- 6.56s
2025-09-19 06:41:27.926761 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.80s
2025-09-19 06:41:27.926772 | orchestrator | Copy fact files --------------------------------------------------------- 3.48s
2025-09-19 06:41:27.926783 | orchestrator | Create custom facts directory ------------------------------------------- 1.44s
2025-09-19 06:41:27.926794 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.34s
2025-09-19 06:41:27.926849 | orchestrator | Copy fact file ---------------------------------------------------------- 1.21s
2025-09-19 06:41:28.129549 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.07s
2025-09-19 06:41:28.129717 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.06s
2025-09-19 06:41:28.129742 | orchestrator | Create custom facts directory ------------------------------------------- 0.47s
2025-09-19 06:41:28.129762 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.47s
2025-09-19 06:41:28.129781 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2025-09-19 06:41:28.129801 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2025-09-19 06:41:28.129821 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s
2025-09-19 06:41:28.129833 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.17s
2025-09-19 06:41:28.129845 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s
2025-09-19 06:41:28.129856 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2025-09-19 06:41:28.129866 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2025-09-19 06:41:28.404442 | orchestrator | + osism apply bootstrap
2025-09-19 06:41:40.364723 | orchestrator | 2025-09-19 06:41:40 | INFO  | Task ed1a1c99-feee-4555-9079-2cb47a1a21b4 (bootstrap) was prepared for execution.
2025-09-19 06:41:40.364837 | orchestrator | 2025-09-19 06:41:40 | INFO  | It takes a moment until task ed1a1c99-feee-4555-9079-2cb47a1a21b4 (bootstrap) has been started and output is visible here.
2025-09-19 06:41:56.372477 | orchestrator |
2025-09-19 06:41:56.372582 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-09-19 06:41:56.372621 | orchestrator |
2025-09-19 06:41:56.372631 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-09-19 06:41:56.372662 | orchestrator | Friday 19 September 2025 06:41:44 +0000 (0:00:00.148) 0:00:00.148 ******
2025-09-19 06:41:56.372672 | orchestrator | ok: [testbed-manager]
2025-09-19 06:41:56.372682 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:41:56.372691 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:41:56.372700 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:41:56.372709 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:41:56.372718 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:41:56.372726 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:41:56.372735 | orchestrator |
2025-09-19 06:41:56.372744 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-19 06:41:56.372753 | orchestrator |
2025-09-19 06:41:56.372762 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-19 06:41:56.372771 | orchestrator | Friday 19 September 2025 06:41:44 +0000 (0:00:00.213) 0:00:00.361 ******
2025-09-19 06:41:56.372780 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:41:56.372789 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:41:56.372797 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:41:56.372806 | orchestrator | ok: [testbed-manager]
2025-09-19 06:41:56.372815 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:41:56.372823 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:41:56.372832 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:41:56.372841 | orchestrator |
2025-09-19 06:41:56.372849 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-09-19 06:41:56.372858 | orchestrator |
2025-09-19 06:41:56.372867 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-19 06:41:56.372876 | orchestrator | Friday 19 September 2025 06:41:48 +0000 (0:00:04.550) 0:00:04.912 ******
2025-09-19 06:41:56.372886 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-09-19 06:41:56.372895 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-09-19 06:41:56.372904 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-09-19 06:41:56.372913 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-09-19 06:41:56.372921 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 06:41:56.372930 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-09-19 06:41:56.372939 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 06:41:56.372948 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 06:41:56.372956 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-09-19 06:41:56.372965 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-09-19 06:41:56.372974 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-09-19 06:41:56.372983 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-09-19 06:41:56.372992 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-09-19 06:41:56.373000 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-09-19 06:41:56.373009 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:41:56.373018 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-09-19 06:41:56.373027 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-09-19 06:41:56.373036 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-09-19 06:41:56.373045 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-09-19 06:41:56.373053 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-09-19 06:41:56.373062 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:41:56.373083 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-19 06:41:56.373093 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-09-19 06:41:56.373102 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-19 06:41:56.373110 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-19 06:41:56.373126 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-09-19 06:41:56.373135 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-09-19 06:41:56.373143 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-19 06:41:56.373152 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-19 06:41:56.373160 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-19 06:41:56.373169 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-09-19 06:41:56.373178 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-19 06:41:56.373187 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-19 06:41:56.373196 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 06:41:56.373204 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-09-19 06:41:56.373213 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-09-19 06:41:56.373222 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-19 06:41:56.373230 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-09-19 06:41:56.373239 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 06:41:56.373247 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-09-19 06:41:56.373256 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-09-19 06:41:56.373265 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-09-19 06:41:56.373273 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-09-19 06:41:56.373282 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:41:56.373291 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 06:41:56.373299 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:41:56.373323 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-09-19 06:41:56.373333 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-09-19 06:41:56.373342 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-09-19 06:41:56.373350 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-09-19 06:41:56.373359 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-09-19 06:41:56.373367 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:41:56.373376 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-09-19 06:41:56.373385 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:41:56.373393 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-09-19 06:41:56.373402 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:41:56.373411 | orchestrator |
2025-09-19 06:41:56.373420 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-09-19 06:41:56.373428 | orchestrator |
2025-09-19 06:41:56.373437 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-09-19 06:41:56.373446 | orchestrator | Friday 19 September 2025 06:41:49 +0000 (0:00:00.371) 0:00:05.284 ******
2025-09-19 06:41:56.373454 | orchestrator | ok: [testbed-manager]
2025-09-19 06:41:56.373463 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:41:56.373472 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:41:56.373480 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:41:56.373489 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:41:56.373498 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:41:56.373506 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:41:56.373515 | orchestrator |
2025-09-19 06:41:56.373524 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-09-19 06:41:56.373532 | orchestrator | Friday 19 September 2025 06:41:50 +0000 (0:00:01.079) 0:00:06.364 ******
2025-09-19 06:41:56.373541 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:41:56.373550 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:41:56.373558 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:41:56.373566 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:41:56.373575 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:41:56.373589 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:41:56.373625 | orchestrator | ok: [testbed-manager]
2025-09-19 06:41:56.373634 | orchestrator |
2025-09-19 06:41:56.373643 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-09-19 06:41:56.373651 | orchestrator | Friday 19 September 2025 06:41:52 +0000 (0:00:01.858) 0:00:08.223 ******
2025-09-19 06:41:56.373661 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 06:41:56.373672 | orchestrator |
2025-09-19 06:41:56.373681 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-09-19 06:41:56.373690 | orchestrator | Friday 19 September 2025 06:41:52 +0000 (0:00:00.214) 0:00:08.437 ******
2025-09-19 06:41:56.373699 | orchestrator | changed: [testbed-manager]
2025-09-19 06:41:56.373708 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:41:56.373717 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:41:56.373726 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:41:56.373734 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:41:56.373743 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:41:56.373752 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:41:56.373761 | orchestrator |
2025-09-19 06:41:56.373770 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-09-19 06:41:56.373778 | orchestrator | Friday 19 September 2025 06:41:54 +0000 (0:00:01.834) 0:00:10.271 ******
2025-09-19 06:41:56.373787 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:41:56.373797 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 06:41:56.373807 | orchestrator |
2025-09-19 06:41:56.373816 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-09-19 06:41:56.373825 | orchestrator | Friday 19 September 2025 06:41:54 +0000 (0:00:00.248) 0:00:10.520 ******
2025-09-19 06:41:56.373833 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:41:56.373842 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:41:56.373851 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:41:56.373860 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:41:56.373868 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:41:56.373877 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:41:56.373886 | orchestrator |
2025-09-19 06:41:56.373895 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-09-19 06:41:56.373903 | orchestrator | Friday 19 September 2025 06:41:55 +0000 (0:00:00.937) 0:00:11.458 ******
2025-09-19 06:41:56.373912 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:41:56.373921 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:41:56.373930 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:41:56.373938 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:41:56.373947 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:41:56.373956 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:41:56.373965 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:41:56.373973 | orchestrator |
2025-09-19 06:41:56.373982 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-09-19 06:41:56.373991 | orchestrator | Friday 19 September 2025 06:41:55 +0000 (0:00:00.523) 0:00:11.982 ******
2025-09-19 06:41:56.374000 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:41:56.374009 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:41:56.374064 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:41:56.374073 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:41:56.374081 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:41:56.374090 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:41:56.374099 | orchestrator | ok: [testbed-manager]
2025-09-19 06:41:56.374107 | orchestrator |
2025-09-19 06:41:56.374117 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-09-19 06:41:56.374132 | orchestrator | Friday 19 September 2025 06:41:56 +0000 (0:00:00.380) 0:00:12.362 ******
2025-09-19 06:41:56.374141 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:41:56.374150 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:41:56.374165 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:42:06.986733 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:42:06.986850 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:42:06.986867 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:42:06.986879 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:42:06.986891 | orchestrator |
2025-09-19 06:42:06.986904 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-09-19 06:42:06.986917 | orchestrator | Friday 19 September 2025 06:41:56 +0000 (0:00:00.196) 0:00:12.558 ******
2025-09-19 06:42:06.986929 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 06:42:06.986957 | orchestrator |
2025-09-19 06:42:06.986969 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-09-19 06:42:06.986981 | orchestrator | Friday 19 September 2025 06:41:56 +0000 (0:00:00.246) 0:00:12.805 ******
2025-09-19 06:42:06.987022 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 06:42:06.987046 | orchestrator |
2025-09-19 06:42:06.987057 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-09-19 06:42:06.987069 | orchestrator | Friday 19 September 2025 06:41:56 +0000 (0:00:00.255) 0:00:13.060 ******
2025-09-19 06:42:06.987080 | orchestrator | ok: [testbed-manager]
2025-09-19 06:42:06.987093 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:42:06.987105 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:42:06.987116 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:42:06.987128 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:42:06.987138 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:42:06.987149 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:42:06.987160 | orchestrator |
2025-09-19 06:42:06.987171 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-09-19 06:42:06.987183 | orchestrator | Friday 19 September 2025 06:41:58 +0000 (0:00:01.255) 0:00:14.316 ******
2025-09-19 06:42:06.987197 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:42:06.987210 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:42:06.987223 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:42:06.987235 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:42:06.987248 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:42:06.987260 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:42:06.987273 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:42:06.987286 | orchestrator |
2025-09-19 06:42:06.987298 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-09-19 06:42:06.987311 | orchestrator | Friday 19 September 2025 06:41:58 +0000 (0:00:00.168) 0:00:14.485 ******
2025-09-19 06:42:06.987324 | orchestrator | ok: [testbed-manager]
2025-09-19 06:42:06.987336 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:42:06.987349 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:42:06.987361 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:42:06.987374 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:42:06.987386 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:42:06.987398 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:42:06.987410 | orchestrator |
2025-09-19 06:42:06.987423 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-09-19 06:42:06.987437 | orchestrator | Friday 19 September 2025 06:41:58 +0000 (0:00:00.212) 0:00:14.967 ******
2025-09-19 06:42:06.987450 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:42:06.987484 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:42:06.987501 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:42:06.987512 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:42:06.987523 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:42:06.987533 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:42:06.987544 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:42:06.987555 | orchestrator |
2025-09-19 06:42:06.987566 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-09-19 06:42:06.987578 | orchestrator | Friday 19 September 2025 06:41:59 +0000 (0:00:00.212) 0:00:15.179 ******
2025-09-19 06:42:06.987609 | orchestrator | ok: [testbed-manager]
2025-09-19 06:42:06.987621 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:42:06.987632 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:42:06.987643 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:42:06.987654 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:42:06.987664 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:42:06.987675 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:42:06.987686 | orchestrator |
2025-09-19 06:42:06.987697 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-09-19 06:42:06.987708 | orchestrator | Friday 19 September 2025 06:41:59 +0000 (0:00:00.474) 0:00:15.654 ******
2025-09-19 06:42:06.987719 | orchestrator | ok: [testbed-manager]
2025-09-19 06:42:06.987729 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:42:06.987740 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:42:06.987752 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:42:06.987762 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:42:06.987773 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:42:06.987784 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:42:06.987795 | orchestrator |
2025-09-19 06:42:06.987806 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-09-19 06:42:06.987817 | orchestrator | Friday 19 September 2025 06:42:00 +0000 (0:00:01.033) 0:00:16.688 ******
2025-09-19 06:42:06.987827 | orchestrator | ok: [testbed-manager]
2025-09-19 06:42:06.987838 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:42:06.987849 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:42:06.987860 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:42:06.987870 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:42:06.987881 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:42:06.987892 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:42:06.987903 | orchestrator |
2025-09-19 06:42:06.987914 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-09-19 06:42:06.987925 | orchestrator | Friday 19 September 2025 06:42:01 +0000 (0:00:01.041) 0:00:17.730 ******
2025-09-19 06:42:06.987955 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 06:42:06.987968 | orchestrator |
2025-09-19 06:42:06.987979 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-09-19 06:42:06.987990 | orchestrator | Friday 19 September 2025 06:42:01 +0000 (0:00:00.316) 0:00:18.046 ******
2025-09-19 06:42:06.988001 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:42:06.988012 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:42:06.988023 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:42:06.988034 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:42:06.988044 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:42:06.988055 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:42:06.988066 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:42:06.988077 | orchestrator |
2025-09-19 06:42:06.988088 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-09-19 06:42:06.988099 | orchestrator | Friday 19 September 2025 06:42:03 +0000 (0:00:01.166) 0:00:19.213 ******
2025-09-19 06:42:06.988110 | orchestrator | ok: [testbed-manager]
2025-09-19 06:42:06.988129 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:42:06.988140 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:42:06.988151 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:42:06.988162 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:42:06.988172 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:42:06.988183 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:42:06.988194 | orchestrator |
2025-09-19 06:42:06.988205 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-09-19 06:42:06.988216 | orchestrator | Friday 19 September 2025 06:42:03 +0000 (0:00:00.217) 0:00:19.431 ******
2025-09-19 06:42:06.988227 | orchestrator | ok: [testbed-manager]
2025-09-19 06:42:06.988238 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:42:06.988249 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:42:06.988259 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:42:06.988270 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:42:06.988281 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:42:06.988292 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:42:06.988302 | orchestrator |
2025-09-19 06:42:06.988313 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-09-19 06:42:06.988325 | orchestrator | Friday 19 September 2025 06:42:03 +0000 (0:00:00.180) 0:00:19.612 ******
2025-09-19 06:42:06.988335 | orchestrator | ok: [testbed-manager]
2025-09-19 06:42:06.988346 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:42:06.988357 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:42:06.988368 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:42:06.988378 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:42:06.988389 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:42:06.988400 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:42:06.988411 | orchestrator |
2025-09-19 06:42:06.988422 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-09-19 06:42:06.988433 | orchestrator | Friday 19 September 2025 06:42:03 +0000 (0:00:00.199) 0:00:19.811 ******
2025-09-19 06:42:06.988444 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 06:42:06.988457 | orchestrator |
2025-09-19 06:42:06.988468 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-09-19 06:42:06.988479 | orchestrator | Friday 19 September 2025 06:42:03 +0000 (0:00:00.246) 0:00:20.058 ******
2025-09-19 06:42:06.988490 | orchestrator | ok: [testbed-manager]
2025-09-19 06:42:06.988501 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:42:06.988512 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:42:06.988528 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:42:06.988539 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:42:06.988550 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:42:06.988561 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:42:06.988572 | orchestrator |
2025-09-19 06:42:06.988583 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-09-19 06:42:06.988613 | orchestrator | Friday 19 September 2025 06:42:04 +0000 (0:00:00.491) 0:00:20.550 ******
2025-09-19 06:42:06.988625 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:42:06.988636 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:42:06.988647 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:42:06.988658 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:42:06.988668 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:42:06.988679 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:42:06.988690 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:42:06.988701 | orchestrator |
2025-09-19 06:42:06.988712 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-09-19 06:42:06.988723 | orchestrator | Friday 19 September 2025 06:42:04 +0000 (0:00:00.197) 0:00:20.748 ******
2025-09-19 06:42:06.988734 | orchestrator | ok: [testbed-manager]
2025-09-19 06:42:06.988745 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:42:06.988756 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:42:06.988773 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:42:06.988784 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:42:06.988795 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:42:06.988806 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:42:06.988816 | orchestrator |
2025-09-19 06:42:06.988827 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-09-19 06:42:06.988838 | orchestrator | Friday 19 September 2025 06:42:05 +0000 (0:00:00.919) 0:00:21.668 ******
2025-09-19 06:42:06.988849 | orchestrator | ok: [testbed-manager]
2025-09-19 06:42:06.988860 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:42:06.988870 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:42:06.988881 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:42:06.988892 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:42:06.988903 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:42:06.988913 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:42:06.988924 | orchestrator |
2025-09-19 06:42:06.988935 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-09-19 06:42:06.988946 | orchestrator | Friday 19 September 2025 06:42:06 +0000 (0:00:00.489) 0:00:22.157 ******
2025-09-19 06:42:06.988957 | orchestrator | ok: [testbed-manager]
2025-09-19 06:42:06.988968 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:42:06.988979 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:42:06.988990 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:42:06.989007 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:42:46.869420 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:42:46.869525 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:42:46.869539 | orchestrator |
2025-09-19 06:42:46.869550 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-09-19 06:42:46.869561 | orchestrator | Friday 19 September 2025 06:42:06 +0000 (0:00:00.951) 0:00:23.109 ******
2025-09-19 06:42:46.869571 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:42:46.869656 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:42:46.869680 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:42:46.869698 | orchestrator | changed: [testbed-manager]
2025-09-19 06:42:46.869715 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:42:46.869732 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:42:46.869749 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:42:46.869768 | orchestrator |
2025-09-19 06:42:46.869786 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2025-09-19 06:42:46.869806 | orchestrator | Friday 19 September 2025 06:42:24 +0000 (0:00:17.375) 0:00:40.485 ******
2025-09-19 06:42:46.869823 | orchestrator | ok: [testbed-manager]
2025-09-19 06:42:46.869834 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:42:46.869846 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:42:46.869857 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:42:46.869868 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:42:46.869879 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:42:46.869890 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:42:46.869901 | orchestrator |
2025-09-19 06:42:46.869912 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-09-19 06:42:46.869923 | orchestrator | Friday 19 September 2025 06:42:24 +0000 (0:00:00.241) 0:00:40.727 ******
2025-09-19 06:42:46.869934 | orchestrator | ok: [testbed-manager]
2025-09-19 06:42:46.869945 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:42:46.869956 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:42:46.869969 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:42:46.869982 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:42:46.869995 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:42:46.870007 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:42:46.870090 | orchestrator |
2025-09-19 06:42:46.870104 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-09-19 06:42:46.870116 | orchestrator | Friday 19 September 2025 06:42:24 +0000 (0:00:00.240) 0:00:40.967 ******
2025-09-19 06:42:46.870129 | orchestrator | ok: [testbed-manager]
2025-09-19 06:42:46.870141 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:42:46.870153 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:42:46.870203 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:42:46.870215 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:42:46.870228 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:42:46.870240 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:42:46.870252 | orchestrator |
2025-09-19 06:42:46.870264 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-09-19 06:42:46.870277 | orchestrator | Friday 19 September 2025 06:42:25 +0000 (0:00:00.233) 0:00:41.201 ******
2025-09-19 06:42:46.870292 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 06:42:46.870307 | orchestrator |
2025-09-19 06:42:46.870320 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-09-19 06:42:46.870332 | orchestrator | Friday 19 September 2025 06:42:25 +0000 (0:00:00.283) 0:00:41.485 ******
2025-09-19 06:42:46.870342 | orchestrator | ok: [testbed-manager]
2025-09-19 06:42:46.870353 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:42:46.870364 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:42:46.870375 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:42:46.870386 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:42:46.870396 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:42:46.870407 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:42:46.870418 | orchestrator |
2025-09-19 06:42:46.870429 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-09-19 06:42:46.870440 | orchestrator | Friday 19 September 2025 06:42:26 +0000 (0:00:01.643) 0:00:43.128 ******
2025-09-19 06:42:46.870451 | orchestrator | changed: [testbed-manager]
2025-09-19 06:42:46.870462 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:42:46.870473 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:42:46.870484 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:42:46.870495 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:42:46.870506 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:42:46.870516 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:42:46.870527 | orchestrator |
2025-09-19 06:42:46.870538 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-09-19 06:42:46.870549 | orchestrator | Friday 19 September 2025 06:42:28 +0000 (0:00:01.030) 0:00:44.158 ******
2025-09-19 06:42:46.870559 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:42:46.870570 | orchestrator | ok: [testbed-manager]
2025-09-19 06:42:46.870615 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:42:46.870634 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:42:46.870652 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:42:46.870669 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:42:46.870688 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:42:46.870707 | orchestrator |
2025-09-19 06:42:46.870726 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-09-19 06:42:46.870738 | orchestrator | Friday 19 September 2025 06:42:28 +0000 (0:00:00.789) 0:00:44.947 ******
2025-09-19 06:42:46.870750 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 06:42:46.870763 | orchestrator |
2025-09-19 06:42:46.870774 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-09-19 06:42:46.870785 | orchestrator | Friday 19 September 2025 06:42:29 +0000 (0:00:00.299) 0:00:45.247 ******
2025-09-19 06:42:46.870796 | orchestrator | changed: [testbed-manager]
2025-09-19 06:42:46.870807 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:42:46.870818 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:42:46.870829 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:42:46.870839 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:42:46.870850 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:42:46.870861 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:42:46.870872 | orchestrator |
2025-09-19 06:42:46.870912 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-09-19 06:42:46.870924 | orchestrator | Friday 19 September 2025 06:42:30 +0000 (0:00:00.992) 0:00:46.240 ******
2025-09-19 06:42:46.870935 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:42:46.870946 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:42:46.870957 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:42:46.870985 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:42:46.870997 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:42:46.871007 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:42:46.871018 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:42:46.871029 | orchestrator |
2025-09-19 06:42:46.871040 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-09-19 06:42:46.871051 | orchestrator | Friday 19 September 2025 06:42:30 +0000 (0:00:00.280) 0:00:46.520 ******
2025-09-19 06:42:46.871062 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:42:46.871073 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:42:46.871083 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:42:46.871094 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:42:46.871105 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:42:46.871116 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:42:46.871126 | orchestrator | changed: [testbed-manager]
2025-09-19 06:42:46.871137 | orchestrator |
2025-09-19 06:42:46.871148 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-09-19 06:42:46.871159 | orchestrator | Friday 19 September 2025 06:42:42 +0000 (0:00:12.265) 0:00:58.786 ******
2025-09-19 06:42:46.871170 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:42:46.871181 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:42:46.871192 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:42:46.871202 | orchestrator | ok: [testbed-manager]
2025-09-19
06:42:46.871213 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:42:46.871224 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:42:46.871235 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:42:46.871245 | orchestrator | 2025-09-19 06:42:46.871256 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-09-19 06:42:46.871267 | orchestrator | Friday 19 September 2025 06:42:43 +0000 (0:00:00.982) 0:00:59.768 ****** 2025-09-19 06:42:46.871278 | orchestrator | ok: [testbed-manager] 2025-09-19 06:42:46.871289 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:42:46.871300 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:42:46.871310 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:42:46.871321 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:42:46.871332 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:42:46.871342 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:42:46.871353 | orchestrator | 2025-09-19 06:42:46.871364 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-09-19 06:42:46.871375 | orchestrator | Friday 19 September 2025 06:42:44 +0000 (0:00:00.756) 0:01:00.524 ****** 2025-09-19 06:42:46.871386 | orchestrator | ok: [testbed-manager] 2025-09-19 06:42:46.871397 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:42:46.871407 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:42:46.871418 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:42:46.871429 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:42:46.871440 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:42:46.871450 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:42:46.871461 | orchestrator | 2025-09-19 06:42:46.871472 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-09-19 06:42:46.871484 | orchestrator | Friday 19 September 2025 06:42:44 +0000 (0:00:00.172) 0:01:00.697 ****** 2025-09-19 06:42:46.871495 | 
orchestrator | ok: [testbed-manager] 2025-09-19 06:42:46.871505 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:42:46.871516 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:42:46.871527 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:42:46.871538 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:42:46.871548 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:42:46.871559 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:42:46.871601 | orchestrator | 2025-09-19 06:42:46.871627 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-09-19 06:42:46.871645 | orchestrator | Friday 19 September 2025 06:42:44 +0000 (0:00:00.166) 0:01:00.864 ****** 2025-09-19 06:42:46.871663 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 06:42:46.871684 | orchestrator | 2025-09-19 06:42:46.871702 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-09-19 06:42:46.871721 | orchestrator | Friday 19 September 2025 06:42:44 +0000 (0:00:00.246) 0:01:01.110 ****** 2025-09-19 06:42:46.871740 | orchestrator | ok: [testbed-manager] 2025-09-19 06:42:46.871759 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:42:46.871771 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:42:46.871781 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:42:46.871792 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:42:46.871803 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:42:46.871813 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:42:46.871824 | orchestrator | 2025-09-19 06:42:46.871835 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-09-19 06:42:46.871846 | orchestrator | Friday 19 September 2025 06:42:46 +0000 
(0:00:01.195) 0:01:02.305 ****** 2025-09-19 06:42:46.871857 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:42:46.871868 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:42:46.871879 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:42:46.871889 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:42:46.871900 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:42:46.871911 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:42:46.871922 | orchestrator | changed: [testbed-manager] 2025-09-19 06:42:46.871933 | orchestrator | 2025-09-19 06:42:46.871944 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-09-19 06:42:46.871955 | orchestrator | Friday 19 September 2025 06:42:46 +0000 (0:00:00.476) 0:01:02.781 ****** 2025-09-19 06:42:46.871966 | orchestrator | ok: [testbed-manager] 2025-09-19 06:42:46.871976 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:42:46.871987 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:42:46.871998 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:42:46.872009 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:42:46.872020 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:42:46.872030 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:42:46.872041 | orchestrator | 2025-09-19 06:42:46.872060 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-09-19 06:45:10.624913 | orchestrator | Friday 19 September 2025 06:42:46 +0000 (0:00:00.208) 0:01:02.990 ****** 2025-09-19 06:45:10.625008 | orchestrator | ok: [testbed-manager] 2025-09-19 06:45:10.625024 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:45:10.625037 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:45:10.625048 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:45:10.625059 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:45:10.625069 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:45:10.625080 | orchestrator | ok: 
[testbed-node-5] 2025-09-19 06:45:10.625091 | orchestrator | 2025-09-19 06:45:10.625103 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-09-19 06:45:10.625114 | orchestrator | Friday 19 September 2025 06:42:47 +0000 (0:00:00.889) 0:01:03.880 ****** 2025-09-19 06:45:10.625125 | orchestrator | changed: [testbed-manager] 2025-09-19 06:45:10.625137 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:45:10.625148 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:45:10.625159 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:45:10.625170 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:45:10.625180 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:45:10.625191 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:45:10.625202 | orchestrator | 2025-09-19 06:45:10.625213 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-09-19 06:45:10.625246 | orchestrator | Friday 19 September 2025 06:42:49 +0000 (0:00:01.295) 0:01:05.175 ****** 2025-09-19 06:45:10.625258 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:45:10.625268 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:45:10.625279 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:45:10.625290 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:45:10.625301 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:45:10.625312 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:45:10.625323 | orchestrator | ok: [testbed-manager] 2025-09-19 06:45:10.625333 | orchestrator | 2025-09-19 06:45:10.625345 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-09-19 06:45:10.625356 | orchestrator | Friday 19 September 2025 06:42:51 +0000 (0:00:02.392) 0:01:07.568 ****** 2025-09-19 06:45:10.625367 | orchestrator | ok: [testbed-manager] 2025-09-19 06:45:10.625377 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:45:10.625388 | orchestrator | 
ok: [testbed-node-2] 2025-09-19 06:45:10.625399 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:45:10.625410 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:45:10.625420 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:45:10.625431 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:45:10.625442 | orchestrator | 2025-09-19 06:45:10.625453 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-09-19 06:45:10.625465 | orchestrator | Friday 19 September 2025 06:43:30 +0000 (0:00:39.458) 0:01:47.026 ****** 2025-09-19 06:45:10.625478 | orchestrator | changed: [testbed-manager] 2025-09-19 06:45:10.625492 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:45:10.625504 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:45:10.625516 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:45:10.625528 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:45:10.625564 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:45:10.625576 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:45:10.625589 | orchestrator | 2025-09-19 06:45:10.625601 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-09-19 06:45:10.625614 | orchestrator | Friday 19 September 2025 06:44:51 +0000 (0:01:20.830) 0:03:07.856 ****** 2025-09-19 06:45:10.625628 | orchestrator | ok: [testbed-manager] 2025-09-19 06:45:10.625640 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:45:10.625653 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:45:10.625666 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:45:10.625678 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:45:10.625691 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:45:10.625703 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:45:10.625715 | orchestrator | 2025-09-19 06:45:10.625728 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-09-19 06:45:10.625752 
| orchestrator | Friday 19 September 2025 06:44:53 +0000 (0:00:01.734) 0:03:09.590 ****** 2025-09-19 06:45:10.625766 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:45:10.625778 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:45:10.625790 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:45:10.625803 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:45:10.625815 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:45:10.625828 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:45:10.625839 | orchestrator | changed: [testbed-manager] 2025-09-19 06:45:10.625850 | orchestrator | 2025-09-19 06:45:10.625861 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-09-19 06:45:10.625872 | orchestrator | Friday 19 September 2025 06:45:04 +0000 (0:00:11.337) 0:03:20.927 ****** 2025-09-19 06:45:10.625891 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-09-19 06:45:10.625908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 
'value': 8192}]}) 2025-09-19 06:45:10.625951 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-09-19 06:45:10.625965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-09-19 06:45:10.625977 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-09-19 06:45:10.625988 | orchestrator | 2025-09-19 06:45:10.625999 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-09-19 06:45:10.626011 | orchestrator | Friday 19 September 2025 06:45:05 +0000 (0:00:00.434) 0:03:21.361 ****** 2025-09-19 06:45:10.626086 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-19 06:45:10.626098 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:45:10.626109 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-19 06:45:10.626120 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-19 06:45:10.626131 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:45:10.626142 | orchestrator | skipping: [testbed-node-4] 2025-09-19 
06:45:10.626153 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-19 06:45:10.626164 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:45:10.626175 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-19 06:45:10.626185 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-19 06:45:10.626196 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-19 06:45:10.626207 | orchestrator | 2025-09-19 06:45:10.626218 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-09-19 06:45:10.626229 | orchestrator | Friday 19 September 2025 06:45:05 +0000 (0:00:00.629) 0:03:21.991 ****** 2025-09-19 06:45:10.626240 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-19 06:45:10.626251 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-19 06:45:10.626262 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-19 06:45:10.626273 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-19 06:45:10.626284 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-19 06:45:10.626295 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-19 06:45:10.626314 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-19 06:45:10.626325 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-19 06:45:10.626335 | orchestrator | skipping: [testbed-manager] => 
(item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-19 06:45:10.626346 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-19 06:45:10.626357 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:45:10.626368 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-19 06:45:10.626379 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-19 06:45:10.626390 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-19 06:45:10.626401 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-19 06:45:10.626412 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-19 06:45:10.626422 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-19 06:45:10.626433 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-19 06:45:10.626444 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-19 06:45:10.626455 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-19 06:45:10.626466 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-19 06:45:10.626485 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-19 06:45:13.844056 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-19 06:45:13.844162 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-19 
06:45:13.844177 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-19 06:45:13.844191 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-19 06:45:13.844203 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:45:13.844216 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-19 06:45:13.844227 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-19 06:45:13.844238 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-19 06:45:13.844249 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-19 06:45:13.844261 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-19 06:45:13.844272 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:45:13.844303 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-19 06:45:13.844315 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-19 06:45:13.844326 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-19 06:45:13.844337 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-19 06:45:13.844348 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-19 06:45:13.844359 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-19 06:45:13.844392 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-19 
06:45:13.844404 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-19 06:45:13.844415 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-19 06:45:13.844426 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-19 06:45:13.844438 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:45:13.844449 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-19 06:45:13.844460 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-19 06:45:13.844471 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-19 06:45:13.844487 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-19 06:45:13.844498 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-19 06:45:13.844510 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-19 06:45:13.844521 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-19 06:45:13.844581 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-19 06:45:13.844595 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-19 06:45:13.844608 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-19 06:45:13.844620 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-19 06:45:13.844633 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'net.core.rmem_max', 'value': 16777216}) 2025-09-19 06:45:13.844645 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-19 06:45:13.844657 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-19 06:45:13.844671 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-19 06:45:13.844683 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-19 06:45:13.844694 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-19 06:45:13.844704 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-19 06:45:13.844715 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-19 06:45:13.844727 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-19 06:45:13.844738 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-19 06:45:13.844766 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-19 06:45:13.844778 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-19 06:45:13.844789 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-19 06:45:13.844800 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-19 06:45:13.844810 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-19 06:45:13.844821 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-19 06:45:13.844832 | 
orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-19 06:45:13.844851 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-19 06:45:13.844863 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-19 06:45:13.844874 | orchestrator | 2025-09-19 06:45:13.844885 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-09-19 06:45:13.844896 | orchestrator | Friday 19 September 2025 06:45:10 +0000 (0:00:04.755) 0:03:26.746 ****** 2025-09-19 06:45:13.844907 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-19 06:45:13.844918 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-19 06:45:13.844929 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-19 06:45:13.844940 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-19 06:45:13.844950 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-19 06:45:13.844961 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-19 06:45:13.844976 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-19 06:45:13.844987 | orchestrator | 2025-09-19 06:45:13.844998 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-09-19 06:45:13.845009 | orchestrator | Friday 19 September 2025 06:45:11 +0000 (0:00:00.628) 0:03:27.375 ****** 2025-09-19 06:45:13.845020 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-19 06:45:13.845031 | orchestrator | skipping: [testbed-manager] 2025-09-19 
06:45:13.845042 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-19 06:45:13.845053 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:45:13.845064 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-19 06:45:13.845075 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-19 06:45:13.845086 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:45:13.845097 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:45:13.845114 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-19 06:45:13.845126 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-19 06:45:13.845137 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-19 06:45:13.845148 | orchestrator | 2025-09-19 06:45:13.845159 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-09-19 06:45:13.845170 | orchestrator | Friday 19 September 2025 06:45:11 +0000 (0:00:00.613) 0:03:27.989 ****** 2025-09-19 06:45:13.845181 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-19 06:45:13.845192 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-19 06:45:13.845203 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:45:13.845214 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-19 06:45:13.845225 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:45:13.845236 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:45:13.845247 
| orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-19 06:45:13.845258 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:45:13.845269 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-19 06:45:13.845280 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-19 06:45:13.845298 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-19 06:45:13.845309 | orchestrator | 2025-09-19 06:45:13.845337 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-09-19 06:45:13.845348 | orchestrator | Friday 19 September 2025 06:45:13 +0000 (0:00:01.663) 0:03:29.652 ****** 2025-09-19 06:45:13.845359 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:45:13.845370 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:45:13.845381 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:45:13.845392 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:45:13.845403 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:45:13.845421 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:45:24.762979 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:45:24.763093 | orchestrator | 2025-09-19 06:45:24.763110 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-09-19 06:45:24.763123 | orchestrator | Friday 19 September 2025 06:45:13 +0000 (0:00:00.318) 0:03:29.971 ****** 2025-09-19 06:45:24.763135 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:45:24.763147 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:45:24.763158 | orchestrator | ok: [testbed-manager] 2025-09-19 06:45:24.763169 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:45:24.763180 | orchestrator | ok: [testbed-node-5] 
2025-09-19 06:45:24.763191 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:45:24.763202 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:45:24.763212 | orchestrator | 2025-09-19 06:45:24.763224 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-09-19 06:45:24.763235 | orchestrator | Friday 19 September 2025 06:45:19 +0000 (0:00:05.722) 0:03:35.693 ****** 2025-09-19 06:45:24.763246 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-09-19 06:45:24.763257 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:45:24.763268 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-09-19 06:45:24.763279 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-09-19 06:45:24.763290 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:45:24.763301 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-09-19 06:45:24.763312 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:45:24.763323 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-09-19 06:45:24.763333 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:45:24.763345 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-09-19 06:45:24.763355 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:45:24.763366 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:45:24.763378 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-09-19 06:45:24.763389 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:45:24.763400 | orchestrator | 2025-09-19 06:45:24.763411 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-09-19 06:45:24.763422 | orchestrator | Friday 19 September 2025 06:45:19 +0000 (0:00:00.222) 0:03:35.916 ****** 2025-09-19 06:45:24.763433 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-09-19 06:45:24.763444 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-09-19 06:45:24.763455 | 
orchestrator | ok: [testbed-node-2] => (item=cron) 2025-09-19 06:45:24.763465 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-09-19 06:45:24.763476 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-09-19 06:45:24.763487 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-09-19 06:45:24.763498 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-09-19 06:45:24.763509 | orchestrator | 2025-09-19 06:45:24.763523 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-09-19 06:45:24.763567 | orchestrator | Friday 19 September 2025 06:45:20 +0000 (0:00:00.931) 0:03:36.847 ****** 2025-09-19 06:45:24.763589 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 06:45:24.763640 | orchestrator | 2025-09-19 06:45:24.763659 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-09-19 06:45:24.763675 | orchestrator | Friday 19 September 2025 06:45:21 +0000 (0:00:00.408) 0:03:37.255 ****** 2025-09-19 06:45:24.763691 | orchestrator | ok: [testbed-manager] 2025-09-19 06:45:24.763708 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:45:24.763743 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:45:24.763762 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:45:24.763779 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:45:24.763797 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:45:24.763816 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:45:24.763835 | orchestrator | 2025-09-19 06:45:24.763853 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-09-19 06:45:24.763871 | orchestrator | Friday 19 September 2025 06:45:22 +0000 (0:00:01.160) 0:03:38.416 ****** 2025-09-19 06:45:24.763890 | 
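The "Check services" / "Start/enable required services" pair above follows a common reconcile pattern: gather service facts, then change state only where the desired state is not already met (hence `ok` rather than `changed` for `cron`). A sketch of the selection logic under assumed fact-dict shapes — the role itself uses Ansible's `service_facts` and `service` modules:

```python
# Hypothetical sketch of desired-state reconciliation for services.
# The fact layout (state/status keys) mirrors Ansible's service_facts
# output, but this helper is illustrative only.

def services_to_fix(facts, required):
    """Return required services that are not both running and enabled."""
    return [
        name for name in required
        if facts.get(name, {}).get("state") != "running"
        or facts.get(name, {}).get("status") != "enabled"
    ]

# cron is already running and enabled, so only nscd would need a change.
facts = {"cron": {"state": "running", "status": "enabled"}}
print(services_to_fix(facts, ["cron", "nscd"]))
```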
orchestrator | ok: [testbed-manager] 2025-09-19 06:45:24.763907 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:45:24.763926 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:45:24.763942 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:45:24.763953 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:45:24.763964 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:45:24.763974 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:45:24.763985 | orchestrator | 2025-09-19 06:45:24.763996 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-09-19 06:45:24.764007 | orchestrator | Friday 19 September 2025 06:45:22 +0000 (0:00:00.532) 0:03:38.948 ****** 2025-09-19 06:45:24.764018 | orchestrator | changed: [testbed-manager] 2025-09-19 06:45:24.764029 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:45:24.764039 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:45:24.764050 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:45:24.764060 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:45:24.764071 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:45:24.764082 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:45:24.764093 | orchestrator | 2025-09-19 06:45:24.764103 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-09-19 06:45:24.764114 | orchestrator | Friday 19 September 2025 06:45:23 +0000 (0:00:00.531) 0:03:39.480 ****** 2025-09-19 06:45:24.764125 | orchestrator | ok: [testbed-manager] 2025-09-19 06:45:24.764136 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:45:24.764146 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:45:24.764157 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:45:24.764168 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:45:24.764179 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:45:24.764190 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:45:24.764200 | orchestrator | 
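The "Disable the dynamic motd-news service" task above reports `changed` on every host. On Ubuntu/Debian systems this is conventionally done by setting `ENABLED=0` in `/etc/default/motd-news`; a sketch of that edit, assuming a hypothetical helper name:

```python
# Hypothetical sketch: flip the ENABLED flag in an /etc/default/motd-news
# style file. The role may implement this differently; this only shows the
# conventional configuration change.

def disable_motd_news(contents: str) -> str:
    """Rewrite the ENABLED= line to 0, keeping all other lines intact."""
    lines = []
    for line in contents.splitlines():
        if line.startswith("ENABLED="):
            line = "ENABLED=0"
        lines.append(line)
    return "\n".join(lines) + "\n"

example = "# Enable motd-news\nENABLED=1\n"
print(disable_motd_news(example))
```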
2025-09-19 06:45:24.764211 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-09-19 06:45:24.764222 | orchestrator | Friday 19 September 2025 06:45:23 +0000 (0:00:00.510) 0:03:39.991 ****** 2025-09-19 06:45:24.764256 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758262943.4883015, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 06:45:24.764272 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758262976.55066, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 06:45:24.764296 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758262988.3721886, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}) 2025-09-19 06:45:24.764309 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758262973.9204311, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 06:45:24.764320 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758262968.080895, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 06:45:24.764332 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758262964.5071836, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 06:45:24.764352 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758262979.085047, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 06:45:24.764382 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 06:45:49.127665 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 06:45:49.127831 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 06:45:49.127853 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 06:45:49.127890 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 06:45:49.127909 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 06:45:49.127926 | orchestrator | 
changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 06:45:49.127944 | orchestrator | 2025-09-19 06:45:49.127964 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-09-19 06:45:49.127983 | orchestrator | Friday 19 September 2025 06:45:24 +0000 (0:00:00.891) 0:03:40.882 ****** 2025-09-19 06:45:49.128001 | orchestrator | changed: [testbed-manager] 2025-09-19 06:45:49.128020 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:45:49.128037 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:45:49.128053 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:45:49.128070 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:45:49.128087 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:45:49.128103 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:45:49.128120 | orchestrator | 2025-09-19 06:45:49.128139 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-09-19 06:45:49.128156 | orchestrator | Friday 19 September 2025 06:45:25 +0000 (0:00:00.973) 0:03:41.855 ****** 2025-09-19 06:45:49.128184 | orchestrator | changed: [testbed-manager] 2025-09-19 06:45:49.128201 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:45:49.128219 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:45:49.128235 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:45:49.128272 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:45:49.128291 | 
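The "Remove pam_motd.so rule" task above iterates over files found in `/etc/pam.d` (the loop items are `stat` dicts for `/etc/pam.d/sshd` and `/etc/pam.d/login`) and drops the lines that load `pam_motd.so`, so the dynamic motd is no longer injected at login. A sketch of the per-file filter, with an illustrative function name — the role itself edits the files via Ansible:

```python
# Hypothetical sketch of the pam_motd.so removal: filter out every PAM rule
# line that loads pam_motd.so while preserving all other rules.

def strip_pam_motd(pam_config: str) -> str:
    """Drop lines referencing pam_motd.so; keep the rest of the PAM stack."""
    kept = [
        line for line in pam_config.splitlines()
        if "pam_motd.so" not in line
    ]
    return "\n".join(kept) + "\n"

# Typical Debian-family sshd PAM snippet (illustrative contents).
sshd_pam = (
    "session    optional     pam_motd.so motd=/run/motd.dynamic\n"
    "session    optional     pam_motd.so noupdate\n"
    "session    required     pam_limits.so\n"
)
print(strip_pam_motd(sshd_pam))
```

The static motd content is then supplied by the "Copy motd file", "Copy issue file", and "Copy issue.net file" tasks that follow.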
orchestrator | changed: [testbed-node-4] 2025-09-19 06:45:49.128309 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:45:49.128328 | orchestrator | 2025-09-19 06:45:49.128345 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-09-19 06:45:49.128365 | orchestrator | Friday 19 September 2025 06:45:26 +0000 (0:00:01.132) 0:03:42.988 ****** 2025-09-19 06:45:49.128384 | orchestrator | changed: [testbed-manager] 2025-09-19 06:45:49.128403 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:45:49.128420 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:45:49.128439 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:45:49.128457 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:45:49.128476 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:45:49.128494 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:45:49.128513 | orchestrator | 2025-09-19 06:45:49.128558 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-09-19 06:45:49.128577 | orchestrator | Friday 19 September 2025 06:45:27 +0000 (0:00:01.126) 0:03:44.115 ****** 2025-09-19 06:45:49.128594 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:45:49.128612 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:45:49.128630 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:45:49.128647 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:45:49.128666 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:45:49.128685 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:45:49.128703 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:45:49.128719 | orchestrator | 2025-09-19 06:45:49.128737 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-09-19 06:45:49.128753 | orchestrator | Friday 19 September 2025 06:45:28 +0000 (0:00:00.311) 0:03:44.426 ****** 2025-09-19 06:45:49.128770 | orchestrator 
| ok: [testbed-manager] 2025-09-19 06:45:49.128789 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:45:49.128805 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:45:49.128821 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:45:49.128837 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:45:49.128853 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:45:49.128868 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:45:49.128884 | orchestrator | 2025-09-19 06:45:49.128899 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-09-19 06:45:49.128915 | orchestrator | Friday 19 September 2025 06:45:29 +0000 (0:00:00.726) 0:03:45.152 ****** 2025-09-19 06:45:49.128935 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 06:45:49.128954 | orchestrator | 2025-09-19 06:45:49.128970 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-09-19 06:45:49.128985 | orchestrator | Friday 19 September 2025 06:45:29 +0000 (0:00:00.406) 0:03:45.559 ****** 2025-09-19 06:45:49.129000 | orchestrator | ok: [testbed-manager] 2025-09-19 06:45:49.129016 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:45:49.129032 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:45:49.129047 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:45:49.129063 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:45:49.129078 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:45:49.129093 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:45:49.129108 | orchestrator | 2025-09-19 06:45:49.129133 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-09-19 06:45:49.129148 | orchestrator | Friday 19 September 2025 06:45:37 
+0000 (0:00:08.163) 0:03:53.723 ****** 2025-09-19 06:45:49.129163 | orchestrator | ok: [testbed-manager] 2025-09-19 06:45:49.129187 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:45:49.129202 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:45:49.129217 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:45:49.129231 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:45:49.129246 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:45:49.129260 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:45:49.129275 | orchestrator | 2025-09-19 06:45:49.129290 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-09-19 06:45:49.129305 | orchestrator | Friday 19 September 2025 06:45:38 +0000 (0:00:01.322) 0:03:55.045 ****** 2025-09-19 06:45:49.129321 | orchestrator | ok: [testbed-manager] 2025-09-19 06:45:49.129335 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:45:49.129350 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:45:49.129365 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:45:49.129379 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:45:49.129394 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:45:49.129409 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:45:49.129423 | orchestrator | 2025-09-19 06:45:49.129438 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-09-19 06:45:49.129453 | orchestrator | Friday 19 September 2025 06:45:39 +0000 (0:00:00.985) 0:03:56.030 ****** 2025-09-19 06:45:49.129468 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 06:45:49.129483 | orchestrator | 2025-09-19 06:45:49.129499 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-09-19 06:45:49.129513 
| orchestrator | Friday 19 September 2025 06:45:40 +0000 (0:00:00.421) 0:03:56.452 ****** 2025-09-19 06:45:49.129550 | orchestrator | changed: [testbed-manager] 2025-09-19 06:45:49.129565 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:45:49.129580 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:45:49.129595 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:45:49.129610 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:45:49.129625 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:45:49.129639 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:45:49.129654 | orchestrator | 2025-09-19 06:45:49.129669 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-09-19 06:45:49.129684 | orchestrator | Friday 19 September 2025 06:45:48 +0000 (0:00:08.260) 0:04:04.712 ****** 2025-09-19 06:45:49.129700 | orchestrator | changed: [testbed-manager] 2025-09-19 06:45:49.129714 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:45:49.129729 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:45:49.129757 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:46:57.927423 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:46:57.927624 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:46:57.927643 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:46:57.927656 | orchestrator | 2025-09-19 06:46:57.927669 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-09-19 06:46:57.927681 | orchestrator | Friday 19 September 2025 06:45:49 +0000 (0:00:00.539) 0:04:05.252 ****** 2025-09-19 06:46:57.927693 | orchestrator | changed: [testbed-manager] 2025-09-19 06:46:57.927704 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:46:57.927715 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:46:57.927726 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:46:57.927737 | orchestrator | changed: [testbed-node-4] 
2025-09-19 06:46:57.927748 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:46:57.927759 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:46:57.927770 | orchestrator | 2025-09-19 06:46:57.927781 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-09-19 06:46:57.927792 | orchestrator | Friday 19 September 2025 06:45:50 +0000 (0:00:01.027) 0:04:06.280 ****** 2025-09-19 06:46:57.927803 | orchestrator | changed: [testbed-manager] 2025-09-19 06:46:57.927814 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:46:57.927851 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:46:57.927862 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:46:57.927873 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:46:57.927884 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:46:57.927895 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:46:57.927906 | orchestrator | 2025-09-19 06:46:57.927916 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-09-19 06:46:57.927927 | orchestrator | Friday 19 September 2025 06:45:51 +0000 (0:00:00.912) 0:04:07.192 ****** 2025-09-19 06:46:57.927938 | orchestrator | ok: [testbed-manager] 2025-09-19 06:46:57.927950 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:46:57.927961 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:46:57.927972 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:46:57.927983 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:46:57.927993 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:46:57.928004 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:46:57.928015 | orchestrator | 2025-09-19 06:46:57.928026 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-09-19 06:46:57.928037 | orchestrator | Friday 19 September 2025 06:45:51 +0000 (0:00:00.214) 0:04:07.407 ****** 2025-09-19 06:46:57.928048 | 
orchestrator | ok: [testbed-manager] 2025-09-19 06:46:57.928059 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:46:57.928069 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:46:57.928080 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:46:57.928091 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:46:57.928101 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:46:57.928113 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:46:57.928124 | orchestrator | 2025-09-19 06:46:57.928135 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-09-19 06:46:57.928146 | orchestrator | Friday 19 September 2025 06:45:51 +0000 (0:00:00.264) 0:04:07.671 ****** 2025-09-19 06:46:57.928157 | orchestrator | ok: [testbed-manager] 2025-09-19 06:46:57.928168 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:46:57.928178 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:46:57.928189 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:46:57.928200 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:46:57.928210 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:46:57.928221 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:46:57.928232 | orchestrator | 2025-09-19 06:46:57.928272 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-09-19 06:46:57.928295 | orchestrator | Friday 19 September 2025 06:45:51 +0000 (0:00:00.245) 0:04:07.916 ****** 2025-09-19 06:46:57.928306 | orchestrator | ok: [testbed-manager] 2025-09-19 06:46:57.928317 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:46:57.928327 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:46:57.928338 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:46:57.928349 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:46:57.928359 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:46:57.928370 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:46:57.928381 | orchestrator | 2025-09-19 06:46:57.928391 | 
orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-09-19 06:46:57.928402 | orchestrator | Friday 19 September 2025 06:45:57 +0000 (0:00:05.714) 0:04:13.631 ****** 2025-09-19 06:46:57.928414 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 06:46:57.928428 | orchestrator | 2025-09-19 06:46:57.928440 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-09-19 06:46:57.928451 | orchestrator | Friday 19 September 2025 06:45:57 +0000 (0:00:00.344) 0:04:13.976 ****** 2025-09-19 06:46:57.928462 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-09-19 06:46:57.928472 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-09-19 06:46:57.928484 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-09-19 06:46:57.928503 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-09-19 06:46:57.928536 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:46:57.928547 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:46:57.928558 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-09-19 06:46:57.928569 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-09-19 06:46:57.928580 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:46:57.928591 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-09-19 06:46:57.928602 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-09-19 06:46:57.928613 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-09-19 06:46:57.928623 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-09-19 06:46:57.928634 | orchestrator | 
skipping: [testbed-node-2] 2025-09-19 06:46:57.928645 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:46:57.928655 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-09-19 06:46:57.928666 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-09-19 06:46:57.928697 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:46:57.928708 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-09-19 06:46:57.928719 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-09-19 06:46:57.928730 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:46:57.928740 | orchestrator | 2025-09-19 06:46:57.928751 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-09-19 06:46:57.928762 | orchestrator | Friday 19 September 2025 06:45:58 +0000 (0:00:00.272) 0:04:14.248 ****** 2025-09-19 06:46:57.928774 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 06:46:57.928785 | orchestrator | 2025-09-19 06:46:57.928796 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-09-19 06:46:57.928806 | orchestrator | Friday 19 September 2025 06:45:58 +0000 (0:00:00.324) 0:04:14.573 ****** 2025-09-19 06:46:57.928817 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-09-19 06:46:57.928828 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:46:57.928839 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-09-19 06:46:57.928849 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-09-19 06:46:57.928860 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:46:57.928871 | orchestrator | skipping: 
[testbed-node-2] => (item=ModemManager.service)
2025-09-19 06:46:57.928881 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:46:57.928892 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-09-19 06:46:57.928903 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:46:57.928913 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-09-19 06:46:57.928924 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:46:57.928935 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:46:57.928945 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-09-19 06:46:57.928956 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:46:57.928967 | orchestrator |
2025-09-19 06:46:57.928978 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-09-19 06:46:57.928989 | orchestrator | Friday 19 September 2025  06:45:58 +0000 (0:00:00.303)       0:04:14.877 ******
2025-09-19 06:46:57.928999 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 06:46:57.929011 | orchestrator |
2025-09-19 06:46:57.929021 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-09-19 06:46:57.929039 | orchestrator | Friday 19 September 2025  06:45:59 +0000 (0:00:00.531)       0:04:15.408 ******
2025-09-19 06:46:57.929050 | orchestrator | changed: [testbed-manager]
2025-09-19 06:46:57.929061 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:46:57.929071 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:46:57.929082 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:46:57.929093 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:46:57.929103 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:46:57.929114 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:46:57.929125 | orchestrator |
2025-09-19 06:46:57.929136 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-09-19 06:46:57.929146 | orchestrator | Friday 19 September 2025  06:46:33 +0000 (0:00:34.189)       0:04:49.598 ******
2025-09-19 06:46:57.929157 | orchestrator | changed: [testbed-manager]
2025-09-19 06:46:57.929168 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:46:57.929178 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:46:57.929189 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:46:57.929200 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:46:57.929210 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:46:57.929221 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:46:57.929232 | orchestrator |
2025-09-19 06:46:57.929242 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-09-19 06:46:57.929253 | orchestrator | Friday 19 September 2025  06:46:41 +0000 (0:00:08.511)       0:04:58.109 ******
2025-09-19 06:46:57.929264 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:46:57.929274 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:46:57.929285 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:46:57.929295 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:46:57.929306 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:46:57.929317 | orchestrator | changed: [testbed-manager]
2025-09-19 06:46:57.929327 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:46:57.929338 | orchestrator |
2025-09-19 06:46:57.929348 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-09-19 06:46:57.929359 | orchestrator | Friday 19 September 2025  06:46:49 +0000 (0:00:07.782)       0:05:05.891 ******
2025-09-19 06:46:57.929370 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:46:57.929381 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:46:57.929391 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:46:57.929402 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:46:57.929413 | orchestrator | ok: [testbed-manager]
2025-09-19 06:46:57.929424 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:46:57.929434 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:46:57.929445 | orchestrator |
2025-09-19 06:46:57.929456 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-09-19 06:46:57.929467 | orchestrator | Friday 19 September 2025  06:46:51 +0000 (0:00:01.791)       0:05:07.683 ******
2025-09-19 06:46:57.929485 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:46:57.929497 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:46:57.929508 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:46:57.929571 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:46:57.929582 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:46:57.929593 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:46:57.929604 | orchestrator | changed: [testbed-manager]
2025-09-19 06:46:57.929615 | orchestrator |
2025-09-19 06:46:57.929626 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-09-19 06:46:57.929644 | orchestrator | Friday 19 September 2025  06:46:57 +0000 (0:00:06.355)       0:05:14.039 ******
2025-09-19 06:47:08.201873 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 06:47:08.202004 | orchestrator |
2025-09-19 06:47:08.202087 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-09-19 06:47:08.202124 | orchestrator | Friday 19 September 2025  06:46:58 +0000 (0:00:00.424)       0:05:14.463 ******
2025-09-19 06:47:08.202137 | orchestrator | changed: [testbed-manager]
2025-09-19 06:47:08.202149 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:47:08.202160 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:47:08.202171 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:47:08.202182 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:47:08.202193 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:47:08.202204 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:47:08.202214 | orchestrator |
2025-09-19 06:47:08.202226 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-09-19 06:47:08.202237 | orchestrator | Friday 19 September 2025  06:46:59 +0000 (0:00:00.757)       0:05:15.221 ******
2025-09-19 06:47:08.202248 | orchestrator | ok: [testbed-manager]
2025-09-19 06:47:08.202260 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:47:08.202270 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:47:08.202281 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:47:08.202292 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:47:08.202303 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:47:08.202313 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:47:08.202324 | orchestrator |
2025-09-19 06:47:08.202335 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-09-19 06:47:08.202346 | orchestrator | Friday 19 September 2025  06:47:00 +0000 (0:00:01.646)       0:05:16.867 ******
2025-09-19 06:47:08.202357 | orchestrator | changed: [testbed-manager]
2025-09-19 06:47:08.202368 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:47:08.202379 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:47:08.202389 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:47:08.202400 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:47:08.202413 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:47:08.202425 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:47:08.202438 | orchestrator |
2025-09-19 06:47:08.202451 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-09-19 06:47:08.202464 | orchestrator | Friday 19 September 2025  06:47:01 +0000 (0:00:00.739)       0:05:17.607 ******
2025-09-19 06:47:08.202476 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:47:08.202489 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:47:08.202502 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:47:08.202547 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:47:08.202561 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:47:08.202575 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:47:08.202588 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:47:08.202601 | orchestrator |
2025-09-19 06:47:08.202613 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-09-19 06:47:08.202624 | orchestrator | Friday 19 September 2025  06:47:01 +0000 (0:00:00.243)       0:05:17.850 ******
2025-09-19 06:47:08.202634 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:47:08.202645 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:47:08.202670 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:47:08.202681 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:47:08.202692 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:47:08.202703 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:47:08.202713 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:47:08.202724 | orchestrator |
2025-09-19 06:47:08.202735 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-09-19 06:47:08.202746 | orchestrator | Friday 19 September 2025  06:47:02 +0000 (0:00:00.319)       0:05:18.170 ******
2025-09-19 06:47:08.202756 | orchestrator | ok: [testbed-manager]
2025-09-19 06:47:08.202767 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:47:08.202778 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:47:08.202789 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:47:08.202800 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:47:08.202811 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:47:08.202822 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:47:08.202840 | orchestrator |
2025-09-19 06:47:08.202851 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-09-19 06:47:08.202862 | orchestrator | Friday 19 September 2025  06:47:02 +0000 (0:00:00.251)       0:05:18.422 ******
2025-09-19 06:47:08.202873 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:47:08.202884 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:47:08.202894 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:47:08.202905 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:47:08.202915 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:47:08.202926 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:47:08.202937 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:47:08.202948 | orchestrator |
2025-09-19 06:47:08.202958 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-09-19 06:47:08.202971 | orchestrator | Friday 19 September 2025  06:47:02 +0000 (0:00:00.230)       0:05:18.653 ******
2025-09-19 06:47:08.202982 | orchestrator | ok: [testbed-manager]
2025-09-19 06:47:08.202992 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:47:08.203003 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:47:08.203014 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:47:08.203025 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:47:08.203035 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:47:08.203046 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:47:08.203057 | orchestrator |
2025-09-19 06:47:08.203068 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-09-19 06:47:08.203079 | orchestrator | Friday 19 September 2025  06:47:02 +0000 (0:00:00.248)       0:05:18.902 ******
2025-09-19 06:47:08.203090 | orchestrator | ok: [testbed-manager] =>
2025-09-19 06:47:08.203101 | orchestrator |   docker_version: 5:27.5.1
2025-09-19 06:47:08.203111 | orchestrator | ok: [testbed-node-0] =>
2025-09-19 06:47:08.203122 | orchestrator |   docker_version: 5:27.5.1
2025-09-19 06:47:08.203133 | orchestrator | ok: [testbed-node-1] =>
2025-09-19 06:47:08.203143 | orchestrator |   docker_version: 5:27.5.1
2025-09-19 06:47:08.203154 | orchestrator | ok: [testbed-node-2] =>
2025-09-19 06:47:08.203165 | orchestrator |   docker_version: 5:27.5.1
2025-09-19 06:47:08.203176 | orchestrator | ok: [testbed-node-3] =>
2025-09-19 06:47:08.203186 | orchestrator |   docker_version: 5:27.5.1
2025-09-19 06:47:08.203214 | orchestrator | ok: [testbed-node-4] =>
2025-09-19 06:47:08.203225 | orchestrator |   docker_version: 5:27.5.1
2025-09-19 06:47:08.203236 | orchestrator | ok: [testbed-node-5] =>
2025-09-19 06:47:08.203247 | orchestrator |   docker_version: 5:27.5.1
2025-09-19 06:47:08.203258 | orchestrator |
2025-09-19 06:47:08.203269 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-09-19 06:47:08.203279 | orchestrator | Friday 19 September 2025  06:47:02 +0000 (0:00:00.219)       0:05:19.121 ******
2025-09-19 06:47:08.203290 | orchestrator | ok: [testbed-manager] =>
2025-09-19 06:47:08.203301 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-19 06:47:08.203312 | orchestrator | ok: [testbed-node-0] =>
2025-09-19 06:47:08.203323 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-19 06:47:08.203333 | orchestrator | ok: [testbed-node-1] =>
2025-09-19 06:47:08.203344 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-19 06:47:08.203355 | orchestrator | ok: [testbed-node-2] =>
2025-09-19 06:47:08.203365 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-19 06:47:08.203376 | orchestrator | ok: [testbed-node-3] =>
2025-09-19 06:47:08.203387 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-19 06:47:08.203397 | orchestrator | ok: [testbed-node-4] =>
2025-09-19 06:47:08.203408 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-19 06:47:08.203419 | orchestrator | ok: [testbed-node-5] =>
2025-09-19 06:47:08.203430 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-19 06:47:08.203440 | orchestrator |
2025-09-19 06:47:08.203451 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-09-19 06:47:08.203462 | orchestrator | Friday 19 September 2025  06:47:03 +0000 (0:00:00.325)       0:05:19.446 ******
2025-09-19 06:47:08.203473 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:47:08.203490 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:47:08.203501 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:47:08.203538 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:47:08.203550 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:47:08.203561 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:47:08.203572 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:47:08.203583 | orchestrator |
2025-09-19 06:47:08.203594 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-09-19 06:47:08.203605 | orchestrator | Friday 19 September 2025  06:47:03 +0000 (0:00:00.259)       0:05:19.705 ******
2025-09-19 06:47:08.203616 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:47:08.203627 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:47:08.203638 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:47:08.203648 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:47:08.203659 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:47:08.203670 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:47:08.203681 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:47:08.203691 | orchestrator |
2025-09-19 06:47:08.203702 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-09-19 06:47:08.203713 | orchestrator | Friday 19 September 2025  06:47:03 +0000 (0:00:00.243)       0:05:19.948 ******
2025-09-19 06:47:08.203726 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 06:47:08.203739 | orchestrator |
2025-09-19 06:47:08.203756 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-09-19 06:47:08.203767 | orchestrator | Friday 19 September 2025  06:47:04 +0000 (0:00:00.348)       0:05:20.297 ******
2025-09-19 06:47:08.203778 | orchestrator | ok: [testbed-manager]
2025-09-19 06:47:08.203789 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:47:08.203800 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:47:08.203811 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:47:08.203821 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:47:08.203832 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:47:08.203843 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:47:08.203854 | orchestrator |
2025-09-19 06:47:08.203865 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-09-19 06:47:08.203876 | orchestrator | Friday 19 September 2025  06:47:04 +0000 (0:00:00.807)       0:05:21.104 ******
2025-09-19 06:47:08.203887 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:47:08.203898 | orchestrator | ok: [testbed-manager]
2025-09-19 06:47:08.203908 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:47:08.203919 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:47:08.203930 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:47:08.203941 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:47:08.203952 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:47:08.203963 | orchestrator |
2025-09-19 06:47:08.203974 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-09-19 06:47:08.203986 | orchestrator | Friday 19 September 2025  06:47:07 +0000 (0:00:02.717)       0:05:23.822 ******
2025-09-19 06:47:08.203997 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-09-19 06:47:08.204008 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-09-19 06:47:08.204019 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-09-19 06:47:08.204030 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:47:08.204041 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-09-19 06:47:08.204052 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-09-19 06:47:08.204063 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-09-19 06:47:08.204073 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-09-19 06:47:08.204084 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-09-19 06:47:08.204095 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:47:08.204113 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-09-19 06:47:08.204124 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-09-19 06:47:08.204135 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-09-19 06:47:08.204145 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-09-19 06:47:08.204156 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:47:08.204167 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-09-19 06:47:08.204178 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-09-19 06:47:08.204196 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-09-19 06:48:06.793506 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:48:06.793637 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-09-19 06:48:06.793652 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-09-19 06:48:06.793662 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:48:06.793671 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-09-19 06:48:06.793680 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:48:06.793689 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-09-19 06:48:06.793698 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-09-19 06:48:06.793707 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-09-19 06:48:06.793716 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:48:06.793725 | orchestrator |
2025-09-19 06:48:06.793735 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-09-19 06:48:06.793745 | orchestrator | Friday 19 September 2025  06:47:08 +0000 (0:00:00.730)       0:05:24.552 ******
2025-09-19 06:48:06.793754 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:06.793763 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:48:06.793772 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:48:06.793780 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:48:06.793789 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:48:06.793798 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:48:06.793807 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:48:06.793815 | orchestrator |
2025-09-19 06:48:06.793824 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-09-19 06:48:06.793833 | orchestrator | Friday 19 September 2025  06:47:14 +0000 (0:00:05.857)       0:05:30.410 ******
2025-09-19 06:48:06.793842 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:06.793851 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:48:06.793859 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:48:06.793868 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:48:06.793877 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:48:06.793885 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:48:06.793894 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:48:06.793903 | orchestrator |
2025-09-19 06:48:06.793912 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-09-19 06:48:06.793920 | orchestrator | Friday 19 September 2025  06:47:15 +0000 (0:00:01.056)       0:05:31.466 ******
2025-09-19 06:48:06.793929 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:06.793938 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:48:06.793946 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:48:06.793955 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:48:06.793964 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:48:06.793972 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:48:06.793981 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:48:06.793989 | orchestrator |
2025-09-19 06:48:06.793998 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-09-19 06:48:06.794007 | orchestrator | Friday 19 September 2025  06:47:23 +0000 (0:00:07.853)       0:05:39.320 ******
2025-09-19 06:48:06.794064 | orchestrator | changed: [testbed-manager]
2025-09-19 06:48:06.794075 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:48:06.794085 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:48:06.794116 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:48:06.794126 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:48:06.794179 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:48:06.794190 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:48:06.794201 | orchestrator |
2025-09-19 06:48:06.794211 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-09-19 06:48:06.794222 | orchestrator | Friday 19 September 2025  06:47:26 +0000 (0:00:03.229)       0:05:42.549 ******
2025-09-19 06:48:06.794232 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:06.794243 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:48:06.794252 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:48:06.794263 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:48:06.794273 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:48:06.794283 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:48:06.794293 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:48:06.794304 | orchestrator |
2025-09-19 06:48:06.794315 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-09-19 06:48:06.794325 | orchestrator | Friday 19 September 2025  06:47:27 +0000 (0:00:01.500)       0:05:44.050 ******
2025-09-19 06:48:06.794336 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:06.794346 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:48:06.794357 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:48:06.794366 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:48:06.794377 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:48:06.794387 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:48:06.794397 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:48:06.794407 | orchestrator |
2025-09-19 06:48:06.794417 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-09-19 06:48:06.794426 | orchestrator | Friday 19 September 2025  06:47:29 +0000 (0:00:01.345)       0:05:45.396 ******
2025-09-19 06:48:06.794435 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:48:06.794443 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:48:06.794452 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:48:06.794460 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:48:06.794469 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:48:06.794478 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:48:06.794486 | orchestrator | changed: [testbed-manager]
2025-09-19 06:48:06.794495 | orchestrator |
2025-09-19 06:48:06.794503 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-09-19 06:48:06.794512 | orchestrator | Friday 19 September 2025  06:47:29 +0000 (0:00:00.696)       0:05:46.093 ******
2025-09-19 06:48:06.794536 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:06.794545 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:48:06.794554 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:48:06.794563 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:48:06.794571 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:48:06.794580 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:48:06.794589 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:48:06.794597 | orchestrator |
2025-09-19 06:48:06.794606 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-09-19 06:48:06.794615 | orchestrator | Friday 19 September 2025  06:47:39 +0000 (0:00:09.854)       0:05:55.948 ******
2025-09-19 06:48:06.794623 | orchestrator | changed: [testbed-manager]
2025-09-19 06:48:06.794647 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:48:06.794656 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:48:06.794665 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:48:06.794674 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:48:06.794682 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:48:06.794691 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:48:06.794700 | orchestrator |
2025-09-19 06:48:06.794708 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-09-19 06:48:06.794717 | orchestrator | Friday 19 September 2025  06:47:40 +0000 (0:00:01.092)       0:05:57.040 ******
2025-09-19 06:48:06.794734 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:06.794742 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:48:06.794751 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:48:06.794760 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:48:06.794768 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:48:06.794777 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:48:06.794785 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:48:06.794794 | orchestrator |
2025-09-19 06:48:06.794803 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-09-19 06:48:06.794811 | orchestrator | Friday 19 September 2025  06:47:49 +0000 (0:00:08.731)       0:06:05.772 ******
2025-09-19 06:48:06.794820 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:06.794829 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:48:06.794837 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:48:06.794846 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:48:06.794854 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:48:06.794863 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:48:06.794871 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:48:06.794880 | orchestrator |
2025-09-19 06:48:06.794889 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-09-19 06:48:06.794897 | orchestrator | Friday 19 September 2025  06:48:00 +0000 (0:00:10.769)       0:06:16.542 ******
2025-09-19 06:48:06.794906 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-09-19 06:48:06.794915 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-09-19 06:48:06.794924 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-09-19 06:48:06.794933 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-09-19 06:48:06.794941 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-09-19 06:48:06.794950 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-09-19 06:48:06.794958 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-09-19 06:48:06.794967 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-09-19 06:48:06.794976 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-09-19 06:48:06.794984 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-09-19 06:48:06.794993 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-09-19 06:48:06.795001 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-09-19 06:48:06.795010 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-09-19 06:48:06.795019 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-09-19 06:48:06.795027 | orchestrator |
2025-09-19 06:48:06.795036 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-09-19 06:48:06.795045 | orchestrator | Friday 19 September 2025  06:48:01 +0000 (0:00:01.331)       0:06:17.873 ******
2025-09-19 06:48:06.795054 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:48:06.795063 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:48:06.795071 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:48:06.795080 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:48:06.795089 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:48:06.795097 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:48:06.795106 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:48:06.795115 | orchestrator |
2025-09-19 06:48:06.795123 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-09-19 06:48:06.795132 | orchestrator | Friday 19 September 2025  06:48:02 +0000 (0:00:00.514)       0:06:18.387 ******
2025-09-19 06:48:06.795141 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:06.795150 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:48:06.795158 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:48:06.795167 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:48:06.795175 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:48:06.795184 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:48:06.795192 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:48:06.795201 | orchestrator |
2025-09-19 06:48:06.795210 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-09-19 06:48:06.795225 | orchestrator | Friday 19 September 2025  06:48:05 +0000 (0:00:03.721)       0:06:22.109 ******
2025-09-19 06:48:06.795233 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:48:06.795242 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:48:06.795250 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:48:06.795259 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:48:06.795268 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:48:06.795276 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:48:06.795285 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:48:06.795293 | orchestrator |
2025-09-19 06:48:06.795303 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-09-19 06:48:06.795312 | orchestrator | Friday 19 September 2025  06:48:06 +0000 (0:00:00.522)       0:06:22.631 ******
2025-09-19 06:48:06.795321 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-09-19 06:48:06.795330 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-09-19 06:48:06.795339 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:48:06.795347 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-09-19 06:48:06.795356 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-09-19 06:48:06.795365 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:48:06.795373 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-09-19 06:48:06.795382 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-09-19 06:48:06.795391 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:48:06.795400 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-09-19 06:48:06.795414 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-09-19 06:48:24.349262 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:48:24.349376 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-09-19 06:48:24.349391 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-09-19 06:48:24.349402 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:48:24.349413 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-09-19 06:48:24.349423 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-09-19 06:48:24.349433 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:48:24.349443 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-09-19 06:48:24.349453 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-09-19 06:48:24.349462 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:48:24.349473 | orchestrator |
2025-09-19 06:48:24.349485 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-09-19 06:48:24.349495 | orchestrator | Friday 19 September 2025  06:48:07 +0000 (0:00:00.551)       0:06:23.183 ******
2025-09-19 06:48:24.349505 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:48:24.349515 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:48:24.349580 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:48:24.349592 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:48:24.349602 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:48:24.349612 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:48:24.349621 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:48:24.349631 | orchestrator |
2025-09-19 06:48:24.349641 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-09-19 06:48:24.349651 | orchestrator | Friday 19 September 2025  06:48:07 +0000 (0:00:00.504)       0:06:23.688 ******
2025-09-19 06:48:24.349661 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:48:24.349670 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:48:24.349680 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:48:24.349690 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:48:24.349699 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:48:24.349709 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:48:24.349741 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:48:24.349751 | orchestrator |
2025-09-19 06:48:24.349761 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-09-19 06:48:24.349770 | orchestrator | Friday 19 September 2025  06:48:08 +0000 (0:00:00.577)       0:06:24.160 ******
2025-09-19 06:48:24.349780 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:48:24.349790 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:48:24.349799 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:48:24.349810 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:48:24.349821 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:48:24.349832 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:48:24.349843 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:48:24.349854 | orchestrator |
2025-09-19 06:48:24.349910 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-09-19 06:48:24.349922 | orchestrator | Friday 19 September 2025  06:48:08 +0000 (0:00:00.577)       0:06:24.737 ******
2025-09-19 06:48:24.349934 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:24.349945 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:48:24.349956 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:48:24.349971 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:48:24.349982 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:48:24.349993 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:48:24.350004 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:48:24.350070 | orchestrator |
2025-09-19 06:48:24.350083 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-09-19 06:48:24.350094 | orchestrator | Friday 19 September 2025  06:48:10 +0000 (0:00:01.611)       0:06:26.349 ******
2025-09-19 06:48:24.350107 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 06:48:24.350120 | orchestrator |
2025-09-19 06:48:24.350131 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-09-19 06:48:24.350143 | orchestrator | Friday 19 September 2025  06:48:10 +0000 (0:00:00.743)       0:06:27.092 ******
2025-09-19 06:48:24.350154 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:24.350165 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:48:24.350176 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:48:24.350192 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:48:24.350209 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:48:24.350226 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:48:24.350241 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:48:24.350257 | orchestrator |
2025-09-19 06:48:24.350273 |
orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-09-19 06:48:24.350290 | orchestrator | Friday 19 September 2025 06:48:11 +0000 (0:00:00.762) 0:06:27.855 ****** 2025-09-19 06:48:24.350307 | orchestrator | ok: [testbed-manager] 2025-09-19 06:48:24.350322 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:48:24.350332 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:48:24.350342 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:48:24.350352 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:48:24.350361 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:48:24.350371 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:48:24.350380 | orchestrator | 2025-09-19 06:48:24.350390 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-09-19 06:48:24.350400 | orchestrator | Friday 19 September 2025 06:48:12 +0000 (0:00:00.902) 0:06:28.757 ****** 2025-09-19 06:48:24.350409 | orchestrator | ok: [testbed-manager] 2025-09-19 06:48:24.350419 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:48:24.350429 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:48:24.350438 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:48:24.350448 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:48:24.350457 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:48:24.350467 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:48:24.350487 | orchestrator | 2025-09-19 06:48:24.350497 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-09-19 06:48:24.350507 | orchestrator | Friday 19 September 2025 06:48:13 +0000 (0:00:01.244) 0:06:30.002 ****** 2025-09-19 06:48:24.350555 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:48:24.350566 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:48:24.350576 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:48:24.350585 | 
orchestrator | ok: [testbed-node-1] 2025-09-19 06:48:24.350595 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:48:24.350605 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:48:24.350614 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:48:24.350624 | orchestrator | 2025-09-19 06:48:24.350634 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-09-19 06:48:24.350644 | orchestrator | Friday 19 September 2025 06:48:15 +0000 (0:00:01.287) 0:06:31.290 ****** 2025-09-19 06:48:24.350654 | orchestrator | ok: [testbed-manager] 2025-09-19 06:48:24.350663 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:48:24.350673 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:48:24.350683 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:48:24.350692 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:48:24.350702 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:48:24.350712 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:48:24.350721 | orchestrator | 2025-09-19 06:48:24.350731 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-09-19 06:48:24.350741 | orchestrator | Friday 19 September 2025 06:48:16 +0000 (0:00:01.245) 0:06:32.535 ****** 2025-09-19 06:48:24.350750 | orchestrator | changed: [testbed-manager] 2025-09-19 06:48:24.350760 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:48:24.350770 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:48:24.350779 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:48:24.350789 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:48:24.350799 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:48:24.350808 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:48:24.350818 | orchestrator | 2025-09-19 06:48:24.350828 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-09-19 06:48:24.350838 | orchestrator | Friday 
19 September 2025 06:48:17 +0000 (0:00:01.478) 0:06:34.014 ****** 2025-09-19 06:48:24.350848 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 06:48:24.350858 | orchestrator | 2025-09-19 06:48:24.350867 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-09-19 06:48:24.350877 | orchestrator | Friday 19 September 2025 06:48:18 +0000 (0:00:00.826) 0:06:34.841 ****** 2025-09-19 06:48:24.350887 | orchestrator | ok: [testbed-manager] 2025-09-19 06:48:24.350897 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:48:24.350906 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:48:24.350916 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:48:24.350926 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:48:24.350935 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:48:24.350945 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:48:24.350955 | orchestrator | 2025-09-19 06:48:24.350965 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-09-19 06:48:24.350974 | orchestrator | Friday 19 September 2025 06:48:19 +0000 (0:00:01.254) 0:06:36.095 ****** 2025-09-19 06:48:24.350984 | orchestrator | ok: [testbed-manager] 2025-09-19 06:48:24.350994 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:48:24.351004 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:48:24.351013 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:48:24.351023 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:48:24.351033 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:48:24.351043 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:48:24.351052 | orchestrator | 2025-09-19 06:48:24.351062 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-09-19 
06:48:24.351078 | orchestrator | Friday 19 September 2025 06:48:21 +0000 (0:00:01.045) 0:06:37.141 ****** 2025-09-19 06:48:24.351089 | orchestrator | ok: [testbed-manager] 2025-09-19 06:48:24.351098 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:48:24.351108 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:48:24.351117 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:48:24.351127 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:48:24.351136 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:48:24.351146 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:48:24.351156 | orchestrator | 2025-09-19 06:48:24.351165 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-09-19 06:48:24.351175 | orchestrator | Friday 19 September 2025 06:48:22 +0000 (0:00:01.170) 0:06:38.311 ****** 2025-09-19 06:48:24.351185 | orchestrator | ok: [testbed-manager] 2025-09-19 06:48:24.351194 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:48:24.351204 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:48:24.351214 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:48:24.351223 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:48:24.351232 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:48:24.351242 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:48:24.351251 | orchestrator | 2025-09-19 06:48:24.351261 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-09-19 06:48:24.351271 | orchestrator | Friday 19 September 2025 06:48:23 +0000 (0:00:01.034) 0:06:39.346 ****** 2025-09-19 06:48:24.351281 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 06:48:24.351291 | orchestrator | 2025-09-19 06:48:24.351301 | orchestrator | TASK [osism.services.docker : Flush handlers] 
********************************** 2025-09-19 06:48:24.351310 | orchestrator | Friday 19 September 2025 06:48:24 +0000 (0:00:00.817) 0:06:40.163 ****** 2025-09-19 06:48:24.351320 | orchestrator | 2025-09-19 06:48:24.351330 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-19 06:48:24.351339 | orchestrator | Friday 19 September 2025 06:48:24 +0000 (0:00:00.038) 0:06:40.202 ****** 2025-09-19 06:48:24.351349 | orchestrator | 2025-09-19 06:48:24.351359 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-19 06:48:24.351368 | orchestrator | Friday 19 September 2025 06:48:24 +0000 (0:00:00.045) 0:06:40.248 ****** 2025-09-19 06:48:24.351378 | orchestrator | 2025-09-19 06:48:24.351387 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-19 06:48:24.351397 | orchestrator | Friday 19 September 2025 06:48:24 +0000 (0:00:00.039) 0:06:40.288 ****** 2025-09-19 06:48:24.351406 | orchestrator | 2025-09-19 06:48:24.351422 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-19 06:48:49.426506 | orchestrator | Friday 19 September 2025 06:48:24 +0000 (0:00:00.038) 0:06:40.327 ****** 2025-09-19 06:48:49.426659 | orchestrator | 2025-09-19 06:48:49.426673 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-19 06:48:49.426683 | orchestrator | Friday 19 September 2025 06:48:24 +0000 (0:00:00.054) 0:06:40.382 ****** 2025-09-19 06:48:49.426692 | orchestrator | 2025-09-19 06:48:49.426701 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-19 06:48:49.426710 | orchestrator | Friday 19 September 2025 06:48:24 +0000 (0:00:00.040) 0:06:40.423 ****** 2025-09-19 06:48:49.426719 | orchestrator | 2025-09-19 06:48:49.426729 | orchestrator | RUNNING HANDLER [osism.commons.repository : 
Force update of package cache] ***** 2025-09-19 06:48:49.426738 | orchestrator | Friday 19 September 2025 06:48:24 +0000 (0:00:00.040) 0:06:40.463 ****** 2025-09-19 06:48:49.426747 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:48:49.426757 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:48:49.426766 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:48:49.426775 | orchestrator | 2025-09-19 06:48:49.426783 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-09-19 06:48:49.426818 | orchestrator | Friday 19 September 2025 06:48:25 +0000 (0:00:01.311) 0:06:41.775 ****** 2025-09-19 06:48:49.426828 | orchestrator | changed: [testbed-manager] 2025-09-19 06:48:49.426837 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:48:49.426846 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:48:49.426854 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:48:49.426863 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:48:49.426871 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:48:49.426880 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:48:49.426889 | orchestrator | 2025-09-19 06:48:49.426897 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-09-19 06:48:49.426906 | orchestrator | Friday 19 September 2025 06:48:26 +0000 (0:00:01.309) 0:06:43.084 ****** 2025-09-19 06:48:49.426915 | orchestrator | changed: [testbed-manager] 2025-09-19 06:48:49.426923 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:48:49.426932 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:48:49.426940 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:48:49.426949 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:48:49.426958 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:48:49.426966 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:48:49.426975 | orchestrator | 2025-09-19 06:48:49.426984 | orchestrator | RUNNING 
HANDLER [osism.services.docker : Restart docker service] *************** 2025-09-19 06:48:49.426992 | orchestrator | Friday 19 September 2025 06:48:28 +0000 (0:00:01.142) 0:06:44.227 ****** 2025-09-19 06:48:49.427001 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:48:49.427009 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:48:49.427018 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:48:49.427027 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:48:49.427036 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:48:49.427045 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:48:49.427053 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:48:49.427064 | orchestrator | 2025-09-19 06:48:49.427074 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-09-19 06:48:49.427096 | orchestrator | Friday 19 September 2025 06:48:30 +0000 (0:00:02.308) 0:06:46.536 ****** 2025-09-19 06:48:49.427106 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:48:49.427116 | orchestrator | 2025-09-19 06:48:49.427127 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-09-19 06:48:49.427136 | orchestrator | Friday 19 September 2025 06:48:30 +0000 (0:00:00.111) 0:06:46.648 ****** 2025-09-19 06:48:49.427147 | orchestrator | ok: [testbed-manager] 2025-09-19 06:48:49.427157 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:48:49.427167 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:48:49.427176 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:48:49.427186 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:48:49.427196 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:48:49.427205 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:48:49.427216 | orchestrator | 2025-09-19 06:48:49.427226 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-09-19 
06:48:49.427237 | orchestrator | Friday 19 September 2025 06:48:31 +0000 (0:00:00.997) 0:06:47.645 ****** 2025-09-19 06:48:49.427246 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:48:49.427256 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:48:49.427265 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:48:49.427275 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:48:49.427285 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:48:49.427295 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:48:49.427304 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:48:49.427314 | orchestrator | 2025-09-19 06:48:49.427324 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-09-19 06:48:49.427334 | orchestrator | Friday 19 September 2025 06:48:32 +0000 (0:00:00.727) 0:06:48.373 ****** 2025-09-19 06:48:49.427345 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 06:48:49.427364 | orchestrator | 2025-09-19 06:48:49.427374 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-09-19 06:48:49.427384 | orchestrator | Friday 19 September 2025 06:48:33 +0000 (0:00:00.871) 0:06:49.244 ****** 2025-09-19 06:48:49.427394 | orchestrator | ok: [testbed-manager] 2025-09-19 06:48:49.427404 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:48:49.427413 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:48:49.427424 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:48:49.427433 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:48:49.427443 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:48:49.427451 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:48:49.427460 | orchestrator | 2025-09-19 06:48:49.427469 | orchestrator | TASK 
[osism.services.docker : Copy docker fact files] ************************** 2025-09-19 06:48:49.427478 | orchestrator | Friday 19 September 2025 06:48:33 +0000 (0:00:00.768) 0:06:50.013 ****** 2025-09-19 06:48:49.427487 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-09-19 06:48:49.427496 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-09-19 06:48:49.427518 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-09-19 06:48:49.427551 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-09-19 06:48:49.427560 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-09-19 06:48:49.427569 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-09-19 06:48:49.427578 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-09-19 06:48:49.427587 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-09-19 06:48:49.427595 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-09-19 06:48:49.427605 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-09-19 06:48:49.427613 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-09-19 06:48:49.427622 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-09-19 06:48:49.427631 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-09-19 06:48:49.427639 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-09-19 06:48:49.427648 | orchestrator | 2025-09-19 06:48:49.427657 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-09-19 06:48:49.427666 | orchestrator | Friday 19 September 2025 06:48:36 +0000 (0:00:02.541) 0:06:52.554 ****** 2025-09-19 06:48:49.427675 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:48:49.427683 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:48:49.427692 | orchestrator 
| skipping: [testbed-node-1] 2025-09-19 06:48:49.427701 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:48:49.427709 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:48:49.427718 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:48:49.427727 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:48:49.427735 | orchestrator | 2025-09-19 06:48:49.427744 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-09-19 06:48:49.427753 | orchestrator | Friday 19 September 2025 06:48:36 +0000 (0:00:00.427) 0:06:52.982 ****** 2025-09-19 06:48:49.427763 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 06:48:49.427773 | orchestrator | 2025-09-19 06:48:49.427783 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-09-19 06:48:49.427791 | orchestrator | Friday 19 September 2025 06:48:37 +0000 (0:00:00.708) 0:06:53.691 ****** 2025-09-19 06:48:49.427800 | orchestrator | ok: [testbed-manager] 2025-09-19 06:48:49.427809 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:48:49.427818 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:48:49.427833 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:48:49.427842 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:48:49.427850 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:48:49.427859 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:48:49.427868 | orchestrator | 2025-09-19 06:48:49.427877 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-09-19 06:48:49.427890 | orchestrator | Friday 19 September 2025 06:48:38 +0000 (0:00:00.967) 0:06:54.659 ****** 2025-09-19 06:48:49.427899 | orchestrator | ok: [testbed-manager] 
2025-09-19 06:48:49.427908 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:48:49.427917 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:48:49.427925 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:48:49.427934 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:48:49.427943 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:48:49.427951 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:48:49.427960 | orchestrator | 2025-09-19 06:48:49.427969 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-09-19 06:48:49.427978 | orchestrator | Friday 19 September 2025 06:48:39 +0000 (0:00:00.817) 0:06:55.476 ****** 2025-09-19 06:48:49.427986 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:48:49.427995 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:48:49.428004 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:48:49.428013 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:48:49.428021 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:48:49.428030 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:48:49.428039 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:48:49.428048 | orchestrator | 2025-09-19 06:48:49.428056 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-09-19 06:48:49.428065 | orchestrator | Friday 19 September 2025 06:48:39 +0000 (0:00:00.523) 0:06:56.000 ****** 2025-09-19 06:48:49.428074 | orchestrator | ok: [testbed-manager] 2025-09-19 06:48:49.428083 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:48:49.428092 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:48:49.428100 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:48:49.428109 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:48:49.428118 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:48:49.428126 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:48:49.428135 | orchestrator | 2025-09-19 06:48:49.428144 | 
orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-09-19 06:48:49.428153 | orchestrator | Friday 19 September 2025 06:48:41 +0000 (0:00:01.430) 0:06:57.431 ****** 2025-09-19 06:48:49.428162 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:48:49.428170 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:48:49.428179 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:48:49.428188 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:48:49.428196 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:48:49.428205 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:48:49.428214 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:48:49.428223 | orchestrator | 2025-09-19 06:48:49.428231 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-09-19 06:48:49.428240 | orchestrator | Friday 19 September 2025 06:48:41 +0000 (0:00:00.498) 0:06:57.929 ****** 2025-09-19 06:48:49.428249 | orchestrator | ok: [testbed-manager] 2025-09-19 06:48:49.428258 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:48:49.428267 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:48:49.428275 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:48:49.428284 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:48:49.428293 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:48:49.428302 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:48:49.428310 | orchestrator | 2025-09-19 06:48:49.428324 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-09-19 06:49:20.009120 | orchestrator | Friday 19 September 2025 06:48:49 +0000 (0:00:07.616) 0:07:05.545 ****** 2025-09-19 06:49:20.009228 | orchestrator | ok: [testbed-manager] 2025-09-19 06:49:20.009246 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:49:20.009282 | orchestrator | changed: [testbed-node-1] 2025-09-19 
06:49:20.009294 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:49:20.009305 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:49:20.009316 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:49:20.009327 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:49:20.009337 | orchestrator | 2025-09-19 06:49:20.009350 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-09-19 06:49:20.009361 | orchestrator | Friday 19 September 2025 06:48:50 +0000 (0:00:01.229) 0:07:06.775 ****** 2025-09-19 06:49:20.009372 | orchestrator | ok: [testbed-manager] 2025-09-19 06:49:20.009383 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:49:20.009393 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:49:20.009404 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:49:20.009415 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:49:20.009426 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:49:20.009437 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:49:20.009447 | orchestrator | 2025-09-19 06:49:20.009458 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-09-19 06:49:20.009469 | orchestrator | Friday 19 September 2025 06:48:52 +0000 (0:00:01.616) 0:07:08.392 ****** 2025-09-19 06:49:20.009480 | orchestrator | ok: [testbed-manager] 2025-09-19 06:49:20.009491 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:49:20.009502 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:49:20.009512 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:49:20.009523 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:49:20.009598 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:49:20.009609 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:49:20.009620 | orchestrator | 2025-09-19 06:49:20.009631 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-19 
06:49:20.009642 | orchestrator | Friday 19 September 2025 06:48:53 +0000 (0:00:01.592) 0:07:09.985 ******
2025-09-19 06:49:20.009653 | orchestrator | ok: [testbed-manager]
2025-09-19 06:49:20.009664 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:49:20.009675 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:49:20.009686 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:49:20.009697 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:49:20.009708 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:49:20.009719 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:49:20.009730 | orchestrator |
2025-09-19 06:49:20.009741 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-19 06:49:20.009752 | orchestrator | Friday 19 September 2025 06:48:54 +0000 (0:00:00.761) 0:07:10.747 ******
2025-09-19 06:49:20.009763 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:49:20.009774 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:49:20.009785 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:49:20.009795 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:49:20.009806 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:49:20.009817 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:49:20.009828 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:49:20.009838 | orchestrator |
2025-09-19 06:49:20.009849 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-09-19 06:49:20.009875 | orchestrator | Friday 19 September 2025 06:48:55 +0000 (0:00:00.713) 0:07:11.460 ******
2025-09-19 06:49:20.009887 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:49:20.009898 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:49:20.009909 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:49:20.009920 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:49:20.009930 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:49:20.009941 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:49:20.009952 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:49:20.009962 | orchestrator |
2025-09-19 06:49:20.009973 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-09-19 06:49:20.009984 | orchestrator | Friday 19 September 2025 06:48:55 +0000 (0:00:00.448) 0:07:11.908 ******
2025-09-19 06:49:20.010004 | orchestrator | ok: [testbed-manager]
2025-09-19 06:49:20.010068 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:49:20.010081 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:49:20.010092 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:49:20.010103 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:49:20.010114 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:49:20.010125 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:49:20.010135 | orchestrator |
2025-09-19 06:49:20.010179 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-09-19 06:49:20.010191 | orchestrator | Friday 19 September 2025 06:48:56 +0000 (0:00:00.549) 0:07:12.457 ******
2025-09-19 06:49:20.010202 | orchestrator | ok: [testbed-manager]
2025-09-19 06:49:20.010213 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:49:20.010224 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:49:20.010235 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:49:20.010245 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:49:20.010256 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:49:20.010267 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:49:20.010278 | orchestrator |
2025-09-19 06:49:20.010289 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-09-19 06:49:20.010300 | orchestrator | Friday 19 September 2025 06:48:56 +0000 (0:00:00.460) 0:07:12.918 ******
2025-09-19 06:49:20.010311 | orchestrator | ok: [testbed-manager]
2025-09-19 06:49:20.010322 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:49:20.010333 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:49:20.010343 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:49:20.010354 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:49:20.010365 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:49:20.010376 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:49:20.010387 | orchestrator |
2025-09-19 06:49:20.010398 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-09-19 06:49:20.010409 | orchestrator | Friday 19 September 2025 06:48:57 +0000 (0:00:00.441) 0:07:13.359 ******
2025-09-19 06:49:20.010420 | orchestrator | ok: [testbed-manager]
2025-09-19 06:49:20.010431 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:49:20.010442 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:49:20.010452 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:49:20.010463 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:49:20.010474 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:49:20.010485 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:49:20.010496 | orchestrator |
2025-09-19 06:49:20.010507 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-09-19 06:49:20.010553 | orchestrator | Friday 19 September 2025 06:49:02 +0000 (0:00:05.637) 0:07:18.997 ******
2025-09-19 06:49:20.010565 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:49:20.010577 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:49:20.010588 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:49:20.010599 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:49:20.010609 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:49:20.010620 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:49:20.010631 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:49:20.010642 | orchestrator |
2025-09-19 06:49:20.010653 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-09-19 06:49:20.010664 | orchestrator | Friday 19 September 2025 06:49:03 +0000 (0:00:00.535) 0:07:19.532 ******
2025-09-19 06:49:20.010676 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 06:49:20.010689 | orchestrator |
2025-09-19 06:49:20.010701 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-09-19 06:49:20.010711 | orchestrator | Friday 19 September 2025 06:49:04 +0000 (0:00:01.040) 0:07:20.573 ******
2025-09-19 06:49:20.010722 | orchestrator | ok: [testbed-manager]
2025-09-19 06:49:20.010741 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:49:20.010752 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:49:20.010763 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:49:20.010774 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:49:20.010785 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:49:20.010795 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:49:20.010806 | orchestrator |
2025-09-19 06:49:20.010817 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-09-19 06:49:20.010828 | orchestrator | Friday 19 September 2025 06:49:06 +0000 (0:00:01.951) 0:07:22.524 ******
2025-09-19 06:49:20.010839 | orchestrator | ok: [testbed-manager]
2025-09-19 06:49:20.010850 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:49:20.010860 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:49:20.010871 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:49:20.010882 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:49:20.010893 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:49:20.010903 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:49:20.010914 | orchestrator |
2025-09-19 06:49:20.010925 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-09-19 06:49:20.010936 | orchestrator | Friday 19 September 2025 06:49:07 +0000 (0:00:01.124) 0:07:23.649 ******
2025-09-19 06:49:20.010947 | orchestrator | ok: [testbed-manager]
2025-09-19 06:49:20.010958 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:49:20.010968 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:49:20.010979 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:49:20.010990 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:49:20.011001 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:49:20.011011 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:49:20.011022 | orchestrator |
2025-09-19 06:49:20.011033 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-09-19 06:49:20.011044 | orchestrator | Friday 19 September 2025 06:49:08 +0000 (0:00:01.128) 0:07:24.777 ******
2025-09-19 06:49:20.011055 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-19 06:49:20.011068 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-19 06:49:20.011088 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-19 06:49:20.011099 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-19 06:49:20.011110 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-19 06:49:20.011121 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-19 06:49:20.011132 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-19 06:49:20.011143 | orchestrator |
2025-09-19 06:49:20.011153 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-09-19 06:49:20.011164 | orchestrator | Friday 19 September 2025 06:49:10 +0000 (0:00:01.723) 0:07:26.501 ******
2025-09-19 06:49:20.011176 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 06:49:20.011187 | orchestrator |
2025-09-19 06:49:20.011198 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-09-19 06:49:20.011209 | orchestrator | Friday 19 September 2025 06:49:11 +0000 (0:00:00.814) 0:07:27.316 ******
2025-09-19 06:49:20.011220 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:49:20.011231 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:49:20.011249 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:49:20.011260 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:49:20.011271 | orchestrator | changed: [testbed-manager]
2025-09-19 06:49:20.011282 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:49:20.011293 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:49:20.011304 | orchestrator |
2025-09-19 06:49:20.011315 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-09-19 06:49:20.011332 | orchestrator | Friday 19 September 2025 06:49:19 +0000 (0:00:08.812) 0:07:36.128 ******
2025-09-19 06:49:35.171071 | orchestrator | ok: [testbed-manager]
2025-09-19 06:49:35.171180 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:49:35.171195 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:49:35.171207 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:49:35.171218 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:49:35.171229 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:49:35.171239 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:49:35.171251 | orchestrator |
2025-09-19 06:49:35.171263 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-09-19 06:49:35.171275 | orchestrator | Friday 19 September 2025 06:49:21 +0000 (0:00:01.653) 0:07:37.782 ******
2025-09-19 06:49:35.171287 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:49:35.171298 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:49:35.171308 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:49:35.171319 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:49:35.171330 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:49:35.171340 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:49:35.171351 | orchestrator |
2025-09-19 06:49:35.171362 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-09-19 06:49:35.171373 | orchestrator | Friday 19 September 2025 06:49:22 +0000 (0:00:01.235) 0:07:39.017 ******
2025-09-19 06:49:35.171384 | orchestrator | changed: [testbed-manager]
2025-09-19 06:49:35.171396 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:49:35.171406 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:49:35.171417 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:49:35.171428 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:49:35.171439 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:49:35.171450 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:49:35.171460 | orchestrator |
2025-09-19 06:49:35.171471 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-09-19 06:49:35.171482 | orchestrator |
2025-09-19 06:49:35.171493 | orchestrator | TASK [Include hardening role] **************************************************
2025-09-19 06:49:35.171504 | orchestrator | Friday 19 September 2025 06:49:24 +0000 (0:00:01.265) 0:07:40.283 ******
2025-09-19 06:49:35.171515 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:49:35.171526 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:49:35.171560 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:49:35.171571 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:49:35.171583 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:49:35.171594 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:49:35.171607 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:49:35.171619 | orchestrator |
2025-09-19 06:49:35.171632 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-09-19 06:49:35.171644 | orchestrator |
2025-09-19 06:49:35.171657 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-09-19 06:49:35.171670 | orchestrator | Friday 19 September 2025 06:49:24 +0000 (0:00:00.452) 0:07:40.736 ******
2025-09-19 06:49:35.171682 | orchestrator | changed: [testbed-manager]
2025-09-19 06:49:35.171694 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:49:35.171706 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:49:35.171718 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:49:35.171731 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:49:35.171743 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:49:35.171756 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:49:35.171792 | orchestrator |
2025-09-19 06:49:35.171819 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-09-19 06:49:35.171832 | orchestrator | Friday 19 September 2025 06:49:25 +0000 (0:00:01.244) 0:07:41.981 ******
2025-09-19 06:49:35.171844 | orchestrator | ok: [testbed-manager]
2025-09-19 06:49:35.171856 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:49:35.171868 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:49:35.171880 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:49:35.171893 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:49:35.171905 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:49:35.171917 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:49:35.171929 | orchestrator |
2025-09-19 06:49:35.171942 | orchestrator | TASK [Include auditd role] *****************************************************
2025-09-19 06:49:35.171955 | orchestrator | Friday 19 September 2025 06:49:27 +0000 (0:00:01.452) 0:07:43.433 ******
2025-09-19 06:49:35.171968 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:49:35.171978 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:49:35.171989 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:49:35.172000 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:49:35.172011 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:49:35.172022 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:49:35.172032 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:49:35.172043 | orchestrator |
2025-09-19 06:49:35.172054 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-09-19 06:49:35.172065 | orchestrator | Friday 19 September 2025 06:49:28 +0000 (0:00:00.784) 0:07:44.218 ******
2025-09-19 06:49:35.172076 | orchestrator | changed: [testbed-manager]
2025-09-19 06:49:35.172087 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:49:35.172098 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:49:35.172108 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:49:35.172119 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:49:35.172130 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:49:35.172141 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:49:35.172151 | orchestrator |
2025-09-19 06:49:35.172162 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-09-19 06:49:35.172173 | orchestrator |
2025-09-19 06:49:35.172184 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-09-19 06:49:35.172195 | orchestrator | Friday 19 September 2025 06:49:29 +0000 (0:00:01.212) 0:07:45.431 ******
2025-09-19 06:49:35.172206 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 06:49:35.172219 | orchestrator |
2025-09-19 06:49:35.172230 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-09-19 06:49:35.172241 | orchestrator | Friday 19 September 2025 06:49:30 +0000 (0:00:00.868) 0:07:46.299 ******
2025-09-19 06:49:35.172252 | orchestrator | ok: [testbed-manager]
2025-09-19 06:49:35.172263 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:49:35.172274 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:49:35.172285 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:49:35.172296 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:49:35.172307 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:49:35.172317 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:49:35.172328 | orchestrator |
2025-09-19 06:49:35.172356 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-09-19 06:49:35.172368 | orchestrator | Friday 19 September 2025 06:49:30 +0000 (0:00:00.741) 0:07:47.041 ******
2025-09-19 06:49:35.172379 | orchestrator | changed: [testbed-manager]
2025-09-19 06:49:35.172390 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:49:35.172401 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:49:35.172413 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:49:35.172424 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:49:35.172434 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:49:35.172445 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:49:35.172456 | orchestrator |
2025-09-19 06:49:35.172490 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-09-19 06:49:35.172501 | orchestrator | Friday 19 September 2025 06:49:32 +0000 (0:00:01.113) 0:07:48.154 ******
2025-09-19 06:49:35.172513 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 06:49:35.172524 | orchestrator |
2025-09-19 06:49:35.172571 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-09-19 06:49:35.172583 | orchestrator | Friday 19 September 2025 06:49:33 +0000 (0:00:01.040) 0:07:49.194 ******
2025-09-19 06:49:35.172594 | orchestrator | ok: [testbed-manager]
2025-09-19 06:49:35.172605 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:49:35.172616 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:49:35.172627 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:49:35.172638 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:49:35.172649 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:49:35.172659 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:49:35.172670 | orchestrator |
2025-09-19 06:49:35.172681 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-09-19 06:49:35.172692 | orchestrator | Friday 19 September 2025 06:49:33 +0000 (0:00:00.884) 0:07:50.079 ******
2025-09-19 06:49:35.172704 | orchestrator | changed: [testbed-manager]
2025-09-19 06:49:35.172715 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:49:35.172726 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:49:35.172737 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:49:35.172748 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:49:35.172758 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:49:35.172769 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:49:35.172780 | orchestrator |
2025-09-19 06:49:35.172791 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:49:35.172803 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-09-19 06:49:35.172815 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-09-19 06:49:35.172826 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-19 06:49:35.172843 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-19 06:49:35.172855 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-19 06:49:35.172866 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-19 06:49:35.172877 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-19 06:49:35.172888 | orchestrator |
2025-09-19 06:49:35.172899 | orchestrator |
2025-09-19 06:49:35.172910 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 06:49:35.172921 | orchestrator | Friday 19 September 2025 06:49:35 +0000 (0:00:01.191) 0:07:51.271 ******
2025-09-19 06:49:35.172932 | orchestrator | ===============================================================================
2025-09-19 06:49:35.172943 | orchestrator | osism.commons.packages : Install required packages --------------------- 80.83s
2025-09-19 06:49:35.172954 | orchestrator | osism.commons.packages : Download required packages -------------------- 39.46s
2025-09-19 06:49:35.172965 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.19s
2025-09-19 06:49:35.172975 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.38s
2025-09-19 06:49:35.172994 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.27s
2025-09-19 06:49:35.173005 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.34s
2025-09-19 06:49:35.173017 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.77s
2025-09-19 06:49:35.173028 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.85s
2025-09-19 06:49:35.173039 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.81s
2025-09-19 06:49:35.173050 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.73s
2025-09-19 06:49:35.173061 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.51s
2025-09-19 06:49:35.173072 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.26s
2025-09-19 06:49:35.173083 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.16s
2025-09-19 06:49:35.173093 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.85s
2025-09-19 06:49:35.173111 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.78s
2025-09-19 06:49:35.617633 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.62s
2025-09-19 06:49:35.617731 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.36s
2025-09-19 06:49:35.617745 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 5.86s
2025-09-19 06:49:35.617756 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.72s
2025-09-19 06:49:35.617768 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.71s
2025-09-19 06:49:35.803921 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-09-19 06:49:35.804006 | orchestrator | + osism apply network
2025-09-19 06:49:48.260022 | orchestrator | 2025-09-19 06:49:48 | INFO  | Task 339a62fb-47f6-47d1-a10e-f48bbf086f54 (network) was prepared for execution.
2025-09-19 06:49:48.260134 | orchestrator | 2025-09-19 06:49:48 | INFO  | It takes a moment until task 339a62fb-47f6-47d1-a10e-f48bbf086f54 (network) has been started and output is visible here.
2025-09-19 06:50:16.487137 | orchestrator |
2025-09-19 06:50:16.487255 | orchestrator | PLAY [Apply role network] ******************************************************
2025-09-19 06:50:16.487274 | orchestrator |
2025-09-19 06:50:16.487286 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-09-19 06:50:16.487298 | orchestrator | Friday 19 September 2025 06:49:52 +0000 (0:00:00.300) 0:00:00.300 ******
2025-09-19 06:50:16.487310 | orchestrator | ok: [testbed-manager]
2025-09-19 06:50:16.487322 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:50:16.487333 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:50:16.487345 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:50:16.487356 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:50:16.487367 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:50:16.487377 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:50:16.487388 | orchestrator |
2025-09-19 06:50:16.487399 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-09-19 06:50:16.487410 | orchestrator | Friday 19 September 2025 06:49:53 +0000 (0:00:00.628) 0:00:00.928 ******
2025-09-19 06:50:16.487423 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 06:50:16.487437 | orchestrator |
2025-09-19 06:50:16.487448 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-09-19 06:50:16.487459 | orchestrator | Friday 19 September 2025 06:49:54 +0000 (0:00:01.042) 0:00:01.971 ******
2025-09-19 06:50:16.487470 | orchestrator | ok: [testbed-manager]
2025-09-19 06:50:16.487481 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:50:16.487492 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:50:16.487503 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:50:16.487513 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:50:16.487593 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:50:16.487634 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:50:16.487654 | orchestrator |
2025-09-19 06:50:16.487670 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-09-19 06:50:16.487683 | orchestrator | Friday 19 September 2025 06:49:56 +0000 (0:00:01.860) 0:00:03.831 ******
2025-09-19 06:50:16.487695 | orchestrator | ok: [testbed-manager]
2025-09-19 06:50:16.487708 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:50:16.487721 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:50:16.487733 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:50:16.487747 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:50:16.487760 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:50:16.487772 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:50:16.487784 | orchestrator |
2025-09-19 06:50:16.487797 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-09-19 06:50:16.487809 | orchestrator | Friday 19 September 2025 06:49:57 +0000 (0:00:01.585) 0:00:05.416 ******
2025-09-19 06:50:16.487822 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-09-19 06:50:16.487835 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-09-19 06:50:16.487848 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-09-19 06:50:16.487860 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-09-19 06:50:16.487871 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-09-19 06:50:16.487882 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-09-19 06:50:16.487893 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-09-19 06:50:16.487904 | orchestrator |
2025-09-19 06:50:16.487914 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-09-19 06:50:16.487926 | orchestrator | Friday 19 September 2025 06:49:58 +0000 (0:00:00.891) 0:00:06.308 ******
2025-09-19 06:50:16.487936 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 06:50:16.487948 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-19 06:50:16.487959 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-19 06:50:16.487969 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-19 06:50:16.487980 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-19 06:50:16.487991 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-19 06:50:16.488001 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-19 06:50:16.488012 | orchestrator |
2025-09-19 06:50:16.488023 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-09-19 06:50:16.488034 | orchestrator | Friday 19 September 2025 06:50:01 +0000 (0:00:03.203) 0:00:09.511 ******
2025-09-19 06:50:16.488045 | orchestrator | changed: [testbed-manager]
2025-09-19 06:50:16.488056 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:50:16.488067 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:50:16.488077 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:50:16.488088 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:50:16.488099 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:50:16.488109 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:50:16.488120 | orchestrator |
2025-09-19 06:50:16.488131 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-09-19 06:50:16.488142 | orchestrator | Friday 19 September 2025 06:50:03 +0000 (0:00:01.523) 0:00:11.035 ******
2025-09-19 06:50:16.488153 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 06:50:16.488164 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-19 06:50:16.488175 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-19 06:50:16.488185 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-19 06:50:16.488196 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-19 06:50:16.488207 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-19 06:50:16.488218 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-19 06:50:16.488228 | orchestrator |
2025-09-19 06:50:16.488239 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-09-19 06:50:16.488250 | orchestrator | Friday 19 September 2025 06:50:05 +0000 (0:00:02.168) 0:00:13.203 ******
2025-09-19 06:50:16.488270 | orchestrator | ok: [testbed-manager]
2025-09-19 06:50:16.488281 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:50:16.488292 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:50:16.488303 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:50:16.488314 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:50:16.488324 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:50:16.488335 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:50:16.488346 | orchestrator |
2025-09-19 06:50:16.488357 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-09-19 06:50:16.488384 | orchestrator | Friday 19 September 2025 06:50:06 +0000 (0:00:01.110) 0:00:14.313 ******
2025-09-19 06:50:16.488396 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:50:16.488407 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:50:16.488418 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:50:16.488429 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:50:16.488439 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:50:16.488450 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:50:16.488461 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:50:16.488472 | orchestrator |
2025-09-19 06:50:16.488483 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-09-19 06:50:16.488494 | orchestrator | Friday 19 September 2025 06:50:07 +0000 (0:00:00.706) 0:00:15.019 ******
2025-09-19 06:50:16.488504 | orchestrator | ok: [testbed-manager]
2025-09-19 06:50:16.488515 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:50:16.488550 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:50:16.488562 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:50:16.488573 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:50:16.488584 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:50:16.488595 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:50:16.488606 | orchestrator |
2025-09-19 06:50:16.488617 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-09-19 06:50:16.488628 | orchestrator | Friday 19 September 2025 06:50:09 +0000 (0:00:02.144) 0:00:17.164 ******
2025-09-19 06:50:16.488639 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:50:16.488650 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:50:16.488661 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:50:16.488672 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:50:16.488682 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:50:16.488693 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:50:16.488705 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-09-19 06:50:16.488717 | orchestrator |
2025-09-19 06:50:16.488734 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-09-19 06:50:16.488745 | orchestrator | Friday 19 September 2025 06:50:10 +0000 (0:00:00.879) 0:00:18.044 ******
2025-09-19 06:50:16.488756 | orchestrator | ok: [testbed-manager]
2025-09-19 06:50:16.488767 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:50:16.488778 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:50:16.488789 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:50:16.488800 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:50:16.488811 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:50:16.488822 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:50:16.488833 | orchestrator |
2025-09-19 06:50:16.488844 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-09-19 06:50:16.488855 | orchestrator | Friday 19 September 2025 06:50:12 +0000 (0:00:01.636) 0:00:19.680 ******
2025-09-19 06:50:16.488866 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 06:50:16.488879 | orchestrator |
2025-09-19 06:50:16.488890 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-09-19 06:50:16.488909 | orchestrator | Friday 19 September 2025 06:50:13 +0000 (0:00:01.314) 0:00:20.995 ******
2025-09-19 06:50:16.488920 | orchestrator | ok: [testbed-manager]
2025-09-19 06:50:16.488931 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:50:16.488942 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:50:16.488953 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:50:16.488964 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:50:16.488975 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:50:16.488986 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:50:16.488997 | orchestrator |
2025-09-19 06:50:16.489008 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-09-19 06:50:16.489019 | orchestrator | Friday 19 September 2025 06:50:14 +0000 (0:00:00.979) 0:00:21.974 ******
2025-09-19 06:50:16.489030 | orchestrator | ok: [testbed-manager]
2025-09-19 06:50:16.489041 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:50:16.489051 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:50:16.489062 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:50:16.489073 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:50:16.489084 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:50:16.489095 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:50:16.489106 | orchestrator |
2025-09-19 06:50:16.489117 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-09-19 06:50:16.489128 | orchestrator | Friday 19 September 2025 06:50:15 +0000 (0:00:00.823) 0:00:22.797 ******
2025-09-19 06:50:16.489139 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-09-19 06:50:16.489150 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-09-19 06:50:16.489161 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-09-19 06:50:16.489171 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-09-19 06:50:16.489182 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-19 06:50:16.489193 | orchestrator
| skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-09-19 06:50:16.489204 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-19 06:50:16.489215 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-09-19 06:50:16.489226 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-19 06:50:16.489237 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-09-19 06:50:16.489247 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-19 06:50:16.489258 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-19 06:50:16.489269 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-19 06:50:16.489280 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-19 06:50:16.489291 | orchestrator | 2025-09-19 06:50:16.489309 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-09-19 06:50:33.296486 | orchestrator | Friday 19 September 2025 06:50:16 +0000 (0:00:01.229) 0:00:24.027 ****** 2025-09-19 06:50:33.296647 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:50:33.296665 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:50:33.296677 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:50:33.296689 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:50:33.296700 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:50:33.296711 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:50:33.296722 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:50:33.296733 | orchestrator | 2025-09-19 06:50:33.296746 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-09-19 06:50:33.296757 | orchestrator | Friday 19 September 2025 06:50:17 +0000 
(0:00:00.648) 0:00:24.675 ****** 2025-09-19 06:50:33.296769 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-node-4, testbed-node-1, testbed-manager, testbed-node-3, testbed-node-2, testbed-node-5 2025-09-19 06:50:33.296806 | orchestrator | 2025-09-19 06:50:33.296818 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-09-19 06:50:33.296829 | orchestrator | Friday 19 September 2025 06:50:21 +0000 (0:00:04.640) 0:00:29.316 ****** 2025-09-19 06:50:33.296842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-19 06:50:33.296855 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-19 06:50:33.296867 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-19 06:50:33.296893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-19 06:50:33.296904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-19 06:50:33.296915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-19 06:50:33.296927 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-19 06:50:33.296938 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-19 06:50:33.296949 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-19 06:50:33.296961 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-19 06:50:33.296972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-19 06:50:33.297000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': 
{'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-19 06:50:33.297012 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-19 06:50:33.297033 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-19 06:50:33.297046 | orchestrator | 2025-09-19 06:50:33.297071 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-09-19 06:50:33.297085 | orchestrator | Friday 19 September 2025 06:50:27 +0000 (0:00:05.713) 0:00:35.029 ****** 2025-09-19 06:50:33.297099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-19 06:50:33.297118 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-19 06:50:33.297131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 
42}}) 2025-09-19 06:50:33.297144 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-19 06:50:33.297157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-19 06:50:33.297170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-19 06:50:33.297183 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-19 06:50:33.297197 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-19 06:50:33.297210 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-19 06:50:33.297224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', 
'192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-19 06:50:33.297237 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-19 06:50:33.297249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-19 06:50:33.297280 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-19 06:50:39.682303 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-19 06:50:39.682402 | orchestrator | 2025-09-19 06:50:39.682415 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-09-19 06:50:39.682427 | orchestrator | Friday 19 September 2025 06:50:33 +0000 (0:00:05.805) 0:00:40.835 ****** 2025-09-19 06:50:39.682437 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 06:50:39.682447 | orchestrator | 2025-09-19 06:50:39.682456 | orchestrator | 
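As an annotation on the two networkd tasks above: the unit files generated for testbed-manager's vxlan0 would, following systemd-networkd conventions, look roughly like the sketch below. The filenames match the paths listed in the cleanup task that follows; the section and option names are assumed from systemd.netdev(5) and systemd.network(5), not taken from the role's actual templates.

```ini
# Sketch of /etc/systemd/network/30-vxlan0.netdev on testbed-manager,
# assembled from the loop item shown above (vni 42, mtu 1350,
# local_ip 192.168.16.5). Option names assumed from systemd.netdev(5).
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5

# Companion /etc/systemd/network/30-vxlan0.network; the 'addresses'
# entry from the same item becomes an Address= line. How the role maps
# the per-peer 'dests' list (e.g. to static FDB entries) is not visible
# in this log.
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20
```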
TASK [osism.commons.network : List existing configuration files] *************** 2025-09-19 06:50:39.682465 | orchestrator | Friday 19 September 2025 06:50:34 +0000 (0:00:01.331) 0:00:42.167 ****** 2025-09-19 06:50:39.682474 | orchestrator | ok: [testbed-manager] 2025-09-19 06:50:39.682484 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:50:39.682493 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:50:39.682501 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:50:39.682510 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:50:39.682518 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:50:39.682611 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:50:39.682625 | orchestrator | 2025-09-19 06:50:39.682634 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-19 06:50:39.682643 | orchestrator | Friday 19 September 2025 06:50:35 +0000 (0:00:01.227) 0:00:43.394 ****** 2025-09-19 06:50:39.682652 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-19 06:50:39.682662 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-19 06:50:39.682671 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-19 06:50:39.682680 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-19 06:50:39.682689 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:50:39.682699 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-19 06:50:39.682708 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-19 06:50:39.682717 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-19 06:50:39.682726 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-19 
06:50:39.682734 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:50:39.682743 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-19 06:50:39.682752 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-19 06:50:39.682761 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-19 06:50:39.682770 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-19 06:50:39.682778 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-19 06:50:39.682787 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-19 06:50:39.682796 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-19 06:50:39.682823 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-19 06:50:39.682832 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:50:39.682840 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-19 06:50:39.682849 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-19 06:50:39.682860 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-19 06:50:39.682871 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-19 06:50:39.682881 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:50:39.682891 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-19 06:50:39.682901 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-19 06:50:39.682911 | orchestrator | skipping: [testbed-node-4] => 
(item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-19 06:50:39.682922 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-19 06:50:39.682932 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:50:39.682942 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:50:39.682952 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-19 06:50:39.682963 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-19 06:50:39.682973 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-19 06:50:39.682983 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-19 06:50:39.682993 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:50:39.683003 | orchestrator | 2025-09-19 06:50:39.683013 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-09-19 06:50:39.683038 | orchestrator | Friday 19 September 2025 06:50:37 +0000 (0:00:02.089) 0:00:45.484 ****** 2025-09-19 06:50:39.683049 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:50:39.683060 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:50:39.683070 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:50:39.683080 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:50:39.683089 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:50:39.683099 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:50:39.683110 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:50:39.683119 | orchestrator | 2025-09-19 06:50:39.683129 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-09-19 06:50:39.683139 | orchestrator | Friday 19 September 2025 06:50:38 +0000 (0:00:00.659) 0:00:46.143 ****** 2025-09-19 06:50:39.683149 | orchestrator | skipping: 
[testbed-manager] 2025-09-19 06:50:39.683159 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:50:39.683169 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:50:39.683179 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:50:39.683189 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:50:39.683200 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:50:39.683210 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:50:39.683218 | orchestrator | 2025-09-19 06:50:39.683227 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 06:50:39.683237 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 06:50:39.683251 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 06:50:39.683261 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 06:50:39.683270 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 06:50:39.683285 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 06:50:39.683294 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 06:50:39.683302 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 06:50:39.683311 | orchestrator | 2025-09-19 06:50:39.683320 | orchestrator | 2025-09-19 06:50:39.683329 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 06:50:39.683338 | orchestrator | Friday 19 September 2025 06:50:39 +0000 (0:00:00.719) 0:00:46.863 ****** 2025-09-19 06:50:39.683346 | orchestrator | =============================================================================== 
2025-09-19 06:50:39.683355 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.81s 2025-09-19 06:50:39.683364 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.71s 2025-09-19 06:50:39.683373 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.64s 2025-09-19 06:50:39.683381 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.20s 2025-09-19 06:50:39.683390 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 2.17s 2025-09-19 06:50:39.683399 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.15s 2025-09-19 06:50:39.683407 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.09s 2025-09-19 06:50:39.683416 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.86s 2025-09-19 06:50:39.683425 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.64s 2025-09-19 06:50:39.683433 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.59s 2025-09-19 06:50:39.683442 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.52s 2025-09-19 06:50:39.683451 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.33s 2025-09-19 06:50:39.683459 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.31s 2025-09-19 06:50:39.683468 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.23s 2025-09-19 06:50:39.683477 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.23s 2025-09-19 06:50:39.683486 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.11s 2025-09-19 
06:50:39.683494 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.04s 2025-09-19 06:50:39.683503 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.98s 2025-09-19 06:50:39.683512 | orchestrator | osism.commons.network : Create required directories --------------------- 0.89s 2025-09-19 06:50:39.683521 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.88s 2025-09-19 06:50:39.951851 | orchestrator | + osism apply wireguard 2025-09-19 06:50:51.781359 | orchestrator | 2025-09-19 06:50:51 | INFO  | Task fefc003e-d9ca-4254-9491-3576aeb8b348 (wireguard) was prepared for execution. 2025-09-19 06:50:51.781478 | orchestrator | 2025-09-19 06:50:51 | INFO  | It takes a moment until task fefc003e-d9ca-4254-9491-3576aeb8b348 (wireguard) has been started and output is visible here. 2025-09-19 06:51:11.665847 | orchestrator | 2025-09-19 06:51:11.665977 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-09-19 06:51:11.665996 | orchestrator | 2025-09-19 06:51:11.666008 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-09-19 06:51:11.666075 | orchestrator | Friday 19 September 2025 06:50:55 +0000 (0:00:00.226) 0:00:00.226 ****** 2025-09-19 06:51:11.666087 | orchestrator | ok: [testbed-manager] 2025-09-19 06:51:11.666123 | orchestrator | 2025-09-19 06:51:11.666165 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-09-19 06:51:11.666179 | orchestrator | Friday 19 September 2025 06:50:57 +0000 (0:00:01.601) 0:00:01.828 ****** 2025-09-19 06:51:11.666199 | orchestrator | changed: [testbed-manager] 2025-09-19 06:51:11.666218 | orchestrator | 2025-09-19 06:51:11.666237 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-09-19 06:51:11.666256 | orchestrator | 
Friday 19 September 2025 06:51:03 +0000 (0:00:06.630) 0:00:08.458 ****** 2025-09-19 06:51:11.666274 | orchestrator | changed: [testbed-manager] 2025-09-19 06:51:11.666293 | orchestrator | 2025-09-19 06:51:11.666312 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-09-19 06:51:11.666332 | orchestrator | Friday 19 September 2025 06:51:04 +0000 (0:00:00.560) 0:00:09.019 ****** 2025-09-19 06:51:11.666351 | orchestrator | changed: [testbed-manager] 2025-09-19 06:51:11.666371 | orchestrator | 2025-09-19 06:51:11.666387 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-09-19 06:51:11.666411 | orchestrator | Friday 19 September 2025 06:51:05 +0000 (0:00:00.470) 0:00:09.490 ****** 2025-09-19 06:51:11.666422 | orchestrator | ok: [testbed-manager] 2025-09-19 06:51:11.666433 | orchestrator | 2025-09-19 06:51:11.666444 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-09-19 06:51:11.666455 | orchestrator | Friday 19 September 2025 06:51:05 +0000 (0:00:00.551) 0:00:10.042 ****** 2025-09-19 06:51:11.666466 | orchestrator | ok: [testbed-manager] 2025-09-19 06:51:11.666477 | orchestrator | 2025-09-19 06:51:11.666488 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-09-19 06:51:11.666499 | orchestrator | Friday 19 September 2025 06:51:06 +0000 (0:00:00.528) 0:00:10.570 ****** 2025-09-19 06:51:11.666510 | orchestrator | ok: [testbed-manager] 2025-09-19 06:51:11.666520 | orchestrator | 2025-09-19 06:51:11.666565 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-09-19 06:51:11.666577 | orchestrator | Friday 19 September 2025 06:51:06 +0000 (0:00:00.416) 0:00:10.987 ****** 2025-09-19 06:51:11.666588 | orchestrator | changed: [testbed-manager] 2025-09-19 06:51:11.666599 | orchestrator | 2025-09-19 06:51:11.666610 | orchestrator 
| TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-09-19 06:51:11.666621 | orchestrator | Friday 19 September 2025 06:51:07 +0000 (0:00:01.257) 0:00:12.244 ****** 2025-09-19 06:51:11.666632 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-19 06:51:11.666643 | orchestrator | changed: [testbed-manager] 2025-09-19 06:51:11.666654 | orchestrator | 2025-09-19 06:51:11.666665 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-09-19 06:51:11.666676 | orchestrator | Friday 19 September 2025 06:51:08 +0000 (0:00:00.979) 0:00:13.224 ****** 2025-09-19 06:51:11.666687 | orchestrator | changed: [testbed-manager] 2025-09-19 06:51:11.666698 | orchestrator | 2025-09-19 06:51:11.666709 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-09-19 06:51:11.666721 | orchestrator | Friday 19 September 2025 06:51:10 +0000 (0:00:01.607) 0:00:14.831 ****** 2025-09-19 06:51:11.666732 | orchestrator | changed: [testbed-manager] 2025-09-19 06:51:11.666742 | orchestrator | 2025-09-19 06:51:11.666753 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 06:51:11.666765 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 06:51:11.666776 | orchestrator | 2025-09-19 06:51:11.666787 | orchestrator | 2025-09-19 06:51:11.666798 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 06:51:11.666809 | orchestrator | Friday 19 September 2025 06:51:11 +0000 (0:00:00.963) 0:00:15.795 ****** 2025-09-19 06:51:11.666820 | orchestrator | =============================================================================== 2025-09-19 06:51:11.666830 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.63s 2025-09-19 06:51:11.666841 | orchestrator | 
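For context on the wireguard play above: a wg0.conf as written by the "Copy wg0.conf configuration file" task typically follows the standard wg-quick layout. The sketch below uses placeholder keys and an assumed listen port; only the file name and the fact that server and preshared keys were generated come from the log itself.

```ini
# Sketch of /etc/wireguard/wg0.conf (placeholder values; the real
# tunnel addresses, port, and keys are not shown in this log).
[Interface]
Address = <server-tunnel-address>
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = <client-tunnel-address>/32
```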
osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.61s 2025-09-19 06:51:11.666863 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.60s 2025-09-19 06:51:11.666874 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.26s 2025-09-19 06:51:11.666893 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.98s 2025-09-19 06:51:11.666911 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.96s 2025-09-19 06:51:11.666929 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s 2025-09-19 06:51:11.666946 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.55s 2025-09-19 06:51:11.666964 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.53s 2025-09-19 06:51:11.666985 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.47s 2025-09-19 06:51:11.666998 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2025-09-19 06:51:11.929104 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-09-19 06:51:11.966466 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-09-19 06:51:11.966574 | orchestrator | Dload Upload Total Spent Left Speed 2025-09-19 06:51:12.051220 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 176 0 --:--:-- --:--:-- --:--:-- 176 100 15 100 15 0 0 175 0 --:--:-- --:--:-- --:--:-- 174 2025-09-19 06:51:12.067279 | orchestrator | + osism apply --environment custom workarounds 2025-09-19 06:51:13.899451 | orchestrator | 2025-09-19 06:51:13 | INFO  | Trying to run play workarounds in environment custom 2025-09-19 06:51:23.990000 | orchestrator | 2025-09-19 06:51:23 | INFO  | Task 
35c9bd88-b87b-4a9f-81e5-cea6c764979d (workarounds) was prepared for execution. 2025-09-19 06:51:23.990478 | orchestrator | 2025-09-19 06:51:23 | INFO  | It takes a moment until task 35c9bd88-b87b-4a9f-81e5-cea6c764979d (workarounds) has been started and output is visible here. 2025-09-19 06:51:49.276063 | orchestrator | 2025-09-19 06:51:49.276158 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 06:51:49.276170 | orchestrator | 2025-09-19 06:51:49.276178 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-09-19 06:51:49.276185 | orchestrator | Friday 19 September 2025 06:51:27 +0000 (0:00:00.145) 0:00:00.145 ****** 2025-09-19 06:51:49.276193 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-09-19 06:51:49.276200 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-09-19 06:51:49.276207 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-09-19 06:51:49.276226 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-09-19 06:51:49.276233 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-09-19 06:51:49.276240 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-09-19 06:51:49.276247 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-09-19 06:51:49.276253 | orchestrator | 2025-09-19 06:51:49.276260 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-09-19 06:51:49.276267 | orchestrator | 2025-09-19 06:51:49.276273 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-19 06:51:49.276280 | orchestrator | Friday 19 September 2025 06:51:28 +0000 (0:00:00.822) 0:00:00.967 ****** 2025-09-19 06:51:49.276287 | orchestrator | ok: [testbed-manager] 
2025-09-19 06:51:49.276295 | orchestrator |
2025-09-19 06:51:49.276302 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-09-19 06:51:49.276309 | orchestrator |
2025-09-19 06:51:49.276315 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-09-19 06:51:49.276340 | orchestrator | Friday 19 September 2025 06:51:30 +0000 (0:00:02.453) 0:00:03.420 ******
2025-09-19 06:51:49.276348 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:51:49.276355 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:51:49.276362 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:51:49.276369 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:51:49.276375 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:51:49.276382 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:51:49.276389 | orchestrator |
2025-09-19 06:51:49.276395 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-09-19 06:51:49.276402 | orchestrator |
2025-09-19 06:51:49.276409 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-09-19 06:51:49.276416 | orchestrator | Friday 19 September 2025 06:51:32 +0000 (0:00:01.815) 0:00:05.236 ******
2025-09-19 06:51:49.276423 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-19 06:51:49.276431 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-19 06:51:49.276438 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-19 06:51:49.276445 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-19 06:51:49.276452 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-19 06:51:49.276458 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-19 06:51:49.276465 | orchestrator |
2025-09-19 06:51:49.276472 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-09-19 06:51:49.276479 | orchestrator | Friday 19 September 2025 06:51:34 +0000 (0:00:01.513) 0:00:06.750 ******
2025-09-19 06:51:49.276485 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:51:49.276493 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:51:49.276499 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:51:49.276506 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:51:49.276513 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:51:49.276519 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:51:49.276576 | orchestrator |
2025-09-19 06:51:49.276583 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-09-19 06:51:49.276590 | orchestrator | Friday 19 September 2025 06:51:38 +0000 (0:00:03.776) 0:00:10.527 ******
2025-09-19 06:51:49.276597 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:51:49.276603 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:51:49.276610 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:51:49.276617 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:51:49.276623 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:51:49.276632 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:51:49.276639 | orchestrator |
2025-09-19 06:51:49.276647 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-09-19 06:51:49.276655 | orchestrator |
2025-09-19 06:51:49.276662 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-09-19 06:51:49.276670 | orchestrator | Friday 19 September 2025 06:51:38 +0000 (0:00:00.739) 0:00:11.266 ******
2025-09-19 06:51:49.276678 | orchestrator | changed: [testbed-manager]
2025-09-19 06:51:49.276685 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:51:49.276693 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:51:49.276700 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:51:49.276708 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:51:49.276716 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:51:49.276723 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:51:49.276731 | orchestrator |
2025-09-19 06:51:49.276739 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-09-19 06:51:49.276746 | orchestrator | Friday 19 September 2025 06:51:40 +0000 (0:00:01.650) 0:00:12.917 ******
2025-09-19 06:51:49.276760 | orchestrator | changed: [testbed-manager]
2025-09-19 06:51:49.276768 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:51:49.276775 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:51:49.276783 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:51:49.276791 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:51:49.276798 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:51:49.276818 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:51:49.276826 | orchestrator |
2025-09-19 06:51:49.276834 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-09-19 06:51:49.276842 | orchestrator | Friday 19 September 2025 06:51:41 +0000 (0:00:01.479) 0:00:14.501 ******
2025-09-19 06:51:49.276849 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:51:49.276857 | orchestrator | ok: [testbed-manager]
2025-09-19 06:51:49.276865 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:51:49.276872 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:51:49.276879 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:51:49.276886 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:51:49.276892 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:51:49.276899 | orchestrator |
2025-09-19 06:51:49.276905 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-09-19 06:51:49.276916 | orchestrator | Friday 19 September 2025 06:51:43 +0000 (0:00:01.479) 0:00:15.980 ******
2025-09-19 06:51:49.276923 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:51:49.276930 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:51:49.276936 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:51:49.276943 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:51:49.276949 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:51:49.276956 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:51:49.276962 | orchestrator | changed: [testbed-manager]
2025-09-19 06:51:49.276969 | orchestrator |
2025-09-19 06:51:49.276975 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-09-19 06:51:49.276982 | orchestrator | Friday 19 September 2025 06:51:45 +0000 (0:00:02.101) 0:00:18.082 ******
2025-09-19 06:51:49.276988 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:51:49.276995 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:51:49.277001 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:51:49.277008 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:51:49.277014 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:51:49.277021 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:51:49.277027 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:51:49.277034 | orchestrator |
2025-09-19 06:51:49.277041 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-09-19 06:51:49.277047 | orchestrator |
2025-09-19 06:51:49.277054 | orchestrator | TASK [Install python3-docker] **************************************************
2025-09-19 06:51:49.277060 | orchestrator | Friday 19 September 2025 06:51:46 +0000 (0:00:00.635) 0:00:18.718 ******
2025-09-19 06:51:49.277067 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:51:49.277074 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:51:49.277080 | orchestrator | ok: [testbed-manager]
2025-09-19 06:51:49.277087 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:51:49.277093 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:51:49.277100 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:51:49.277106 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:51:49.277113 | orchestrator |
2025-09-19 06:51:49.277119 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:51:49.277127 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 06:51:49.277135 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:51:49.277142 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:51:49.277153 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:51:49.277160 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:51:49.277167 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:51:49.277173 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:51:49.277180 | orchestrator |
2025-09-19 06:51:49.277186 | orchestrator |
2025-09-19 06:51:49.277193 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 06:51:49.277199 | orchestrator | Friday 19 September 2025 06:51:49 +0000 (0:00:03.034) 0:00:21.752 ******
2025-09-19 06:51:49.277206 | orchestrator | ===============================================================================
2025-09-19 06:51:49.277213 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.78s
2025-09-19 06:51:49.277219 | orchestrator | Install python3-docker -------------------------------------------------- 3.03s
2025-09-19 06:51:49.277226 | orchestrator | Apply netplan configuration --------------------------------------------- 2.45s
2025-09-19 06:51:49.277232 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 2.10s
2025-09-19 06:51:49.277239 | orchestrator | Apply netplan configuration --------------------------------------------- 1.82s
2025-09-19 06:51:49.277246 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.65s
2025-09-19 06:51:49.277252 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.58s
2025-09-19 06:51:49.277259 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.51s
2025-09-19 06:51:49.277265 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.48s
2025-09-19 06:51:49.277272 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.82s
2025-09-19 06:51:49.277278 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.74s
2025-09-19 06:51:49.277288 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.64s
2025-09-19 06:51:49.874254 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-09-19 06:52:01.780645 | orchestrator | 2025-09-19 06:52:01 | INFO  | Task 82d4a357-22c6-4b29-87c4-ccef471e0667 (reboot) was prepared for execution.
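The `-e ireallymeanit=yes` extra variable passed to `osism apply reboot` above satisfies a confirmation guard inside the reboot playbook: without it, the first task aborts the play before any node is rebooted. A hypothetical minimal sketch of such a guard (modeled only on the task names visible in this log, not the actual OSISM playbook; the async fire-and-forget reboot shown here is one common Ansible pattern for "do not wait"):

```yaml
- name: Reboot systems
  hosts: testbed-nodes
  serial: 1
  vars:
    ireallymeanit: "no"   # overridden on the CLI with -e ireallymeanit=yes
  tasks:
    - name: Exit playbook, if user did not mean to reboot systems
      ansible.builtin.fail:
        msg: "Pass -e ireallymeanit=yes to confirm the reboot."
      when: ireallymeanit != "yes"

    - name: Reboot system - do not wait for the reboot to complete
      ansible.builtin.shell: sleep 2 && shutdown -r now
      async: 1   # fire and forget; a separate wait-for-connection run follows
      poll: 0
```

With the variable set on the command line the guard task is skipped, which matches the `skipping:` lines in the play output below.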
2025-09-19 06:52:01.780758 | orchestrator | 2025-09-19 06:52:01 | INFO  | It takes a moment until task 82d4a357-22c6-4b29-87c4-ccef471e0667 (reboot) has been started and output is visible here.
2025-09-19 06:52:11.121362 | orchestrator |
2025-09-19 06:52:11.121448 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 06:52:11.121458 | orchestrator |
2025-09-19 06:52:11.121464 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 06:52:11.121472 | orchestrator | Friday 19 September 2025 06:52:05 +0000 (0:00:00.194) 0:00:00.194 ******
2025-09-19 06:52:11.121478 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:52:11.121485 | orchestrator |
2025-09-19 06:52:11.121491 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 06:52:11.121497 | orchestrator | Friday 19 September 2025 06:52:05 +0000 (0:00:00.089) 0:00:00.284 ******
2025-09-19 06:52:11.121503 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:52:11.121509 | orchestrator |
2025-09-19 06:52:11.121515 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 06:52:11.121561 | orchestrator | Friday 19 September 2025 06:52:06 +0000 (0:00:00.891) 0:00:01.175 ******
2025-09-19 06:52:11.121572 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:52:11.121578 | orchestrator |
2025-09-19 06:52:11.121584 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 06:52:11.121609 | orchestrator |
2025-09-19 06:52:11.121615 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 06:52:11.121621 | orchestrator | Friday 19 September 2025 06:52:06 +0000 (0:00:00.105) 0:00:01.281 ******
2025-09-19 06:52:11.121627 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:52:11.121633 | orchestrator |
2025-09-19 06:52:11.121639 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 06:52:11.121645 | orchestrator | Friday 19 September 2025 06:52:06 +0000 (0:00:00.104) 0:00:01.385 ******
2025-09-19 06:52:11.121651 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:52:11.121656 | orchestrator |
2025-09-19 06:52:11.121672 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 06:52:11.121678 | orchestrator | Friday 19 September 2025 06:52:07 +0000 (0:00:00.671) 0:00:02.056 ******
2025-09-19 06:52:11.121690 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:52:11.121696 | orchestrator |
2025-09-19 06:52:11.121702 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 06:52:11.121708 | orchestrator |
2025-09-19 06:52:11.121714 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 06:52:11.121740 | orchestrator | Friday 19 September 2025 06:52:07 +0000 (0:00:00.113) 0:00:02.170 ******
2025-09-19 06:52:11.121746 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:52:11.121752 | orchestrator |
2025-09-19 06:52:11.121758 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 06:52:11.121764 | orchestrator | Friday 19 September 2025 06:52:07 +0000 (0:00:00.177) 0:00:02.347 ******
2025-09-19 06:52:11.121770 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:52:11.121776 | orchestrator |
2025-09-19 06:52:11.121785 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 06:52:11.121792 | orchestrator | Friday 19 September 2025 06:52:08 +0000 (0:00:00.656) 0:00:03.004 ******
2025-09-19 06:52:11.121799 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:52:11.121805 | orchestrator |
2025-09-19 06:52:11.121811 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 06:52:11.121817 | orchestrator |
2025-09-19 06:52:11.121824 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 06:52:11.121830 | orchestrator | Friday 19 September 2025 06:52:08 +0000 (0:00:00.097) 0:00:03.101 ******
2025-09-19 06:52:11.121836 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:52:11.121843 | orchestrator |
2025-09-19 06:52:11.121849 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 06:52:11.121855 | orchestrator | Friday 19 September 2025 06:52:08 +0000 (0:00:00.090) 0:00:03.191 ******
2025-09-19 06:52:11.121861 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:52:11.121868 | orchestrator |
2025-09-19 06:52:11.121874 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 06:52:11.121880 | orchestrator | Friday 19 September 2025 06:52:09 +0000 (0:00:00.612) 0:00:03.803 ******
2025-09-19 06:52:11.121886 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:52:11.121893 | orchestrator |
2025-09-19 06:52:11.121899 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 06:52:11.121905 | orchestrator |
2025-09-19 06:52:11.121911 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 06:52:11.121918 | orchestrator | Friday 19 September 2025 06:52:09 +0000 (0:00:00.099) 0:00:03.903 ******
2025-09-19 06:52:11.121925 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:52:11.121933 | orchestrator |
2025-09-19 06:52:11.121940 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 06:52:11.121947 | orchestrator | Friday 19 September 2025 06:52:09 +0000 (0:00:00.091) 0:00:03.994 ******
2025-09-19 06:52:11.121954 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:52:11.121961 | orchestrator |
2025-09-19 06:52:11.121968 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 06:52:11.121982 | orchestrator | Friday 19 September 2025 06:52:09 +0000 (0:00:00.656) 0:00:04.651 ******
2025-09-19 06:52:11.121989 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:52:11.121996 | orchestrator |
2025-09-19 06:52:11.122003 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 06:52:11.122011 | orchestrator |
2025-09-19 06:52:11.122079 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 06:52:11.122087 | orchestrator | Friday 19 September 2025 06:52:10 +0000 (0:00:00.130) 0:00:04.782 ******
2025-09-19 06:52:11.122095 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:52:11.122102 | orchestrator |
2025-09-19 06:52:11.122109 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 06:52:11.122116 | orchestrator | Friday 19 September 2025 06:52:10 +0000 (0:00:00.105) 0:00:04.888 ******
2025-09-19 06:52:11.122123 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:52:11.122131 | orchestrator |
2025-09-19 06:52:11.122138 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 06:52:11.122145 | orchestrator | Friday 19 September 2025 06:52:10 +0000 (0:00:00.661) 0:00:05.550 ******
2025-09-19 06:52:11.122164 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:52:11.122171 | orchestrator |
2025-09-19 06:52:11.122183 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:52:11.122191 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:52:11.122199 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:52:11.122207 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:52:11.122214 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:52:11.122221 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:52:11.122228 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:52:11.122236 | orchestrator |
2025-09-19 06:52:11.122243 | orchestrator |
2025-09-19 06:52:11.122250 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 06:52:11.122258 | orchestrator | Friday 19 September 2025 06:52:10 +0000 (0:00:00.037) 0:00:05.587 ******
2025-09-19 06:52:11.122265 | orchestrator | ===============================================================================
2025-09-19 06:52:11.122272 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.15s
2025-09-19 06:52:11.122279 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.66s
2025-09-19 06:52:11.122287 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.58s
2025-09-19 06:52:11.381639 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-09-19 06:52:23.302697 | orchestrator | 2025-09-19 06:52:23 | INFO  | Task 34e9f8bc-9d51-4fc0-a3ee-9673fbe91851 (wait-for-connection) was prepared for execution.
2025-09-19 06:52:23.302811 | orchestrator | 2025-09-19 06:52:23 | INFO  | It takes a moment until task 34e9f8bc-9d51-4fc0-a3ee-9673fbe91851 (wait-for-connection) has been started and output is visible here.
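Because the reboot play above deliberately does not wait for the nodes to come back, a separate `osism apply wait-for-connection` run confirms all nodes are reachable again before deployment continues. A minimal sketch of such a play (hypothetical; play and task names follow the log, and the `delay`/`timeout` values are illustrative assumptions, assuming Ansible's built-in `wait_for_connection` module):

```yaml
- name: Wait until remote systems are reachable
  hosts: testbed-nodes
  gather_facts: false
  tasks:
    - name: Wait until remote system is reachable
      ansible.builtin.wait_for_connection:
        delay: 5        # give the hosts a moment to actually go down first
        timeout: 600    # fail if a node is not back within 10 minutes
```

The ~11.6s task time in the recap below is consistent with this pattern: the task simply blocks until SSH on every node answers again.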
2025-09-19 06:52:39.107230 | orchestrator |
2025-09-19 06:52:39.107348 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-09-19 06:52:39.107365 | orchestrator |
2025-09-19 06:52:39.107379 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-09-19 06:52:39.107393 | orchestrator | Friday 19 September 2025 06:52:27 +0000 (0:00:00.243) 0:00:00.243 ******
2025-09-19 06:52:39.107435 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:52:39.107450 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:52:39.107463 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:52:39.107475 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:52:39.107488 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:52:39.107501 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:52:39.107513 | orchestrator |
2025-09-19 06:52:39.107585 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:52:39.107600 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 06:52:39.107615 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 06:52:39.107628 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 06:52:39.107642 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 06:52:39.107655 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 06:52:39.107668 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 06:52:39.107681 | orchestrator |
2025-09-19 06:52:39.107695 | orchestrator |
2025-09-19 06:52:39.107708 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 06:52:39.107721 | orchestrator | Friday 19 September 2025 06:52:38 +0000 (0:00:11.583) 0:00:11.827 ******
2025-09-19 06:52:39.107735 | orchestrator | ===============================================================================
2025-09-19 06:52:39.107748 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.58s
2025-09-19 06:52:39.287779 | orchestrator | + osism apply hddtemp
2025-09-19 06:52:51.059458 | orchestrator | 2025-09-19 06:52:51 | INFO  | Task e3c5bffc-132a-46a7-8730-07ca952c6aa8 (hddtemp) was prepared for execution.
2025-09-19 06:52:51.059627 | orchestrator | 2025-09-19 06:52:51 | INFO  | It takes a moment until task e3c5bffc-132a-46a7-8730-07ca952c6aa8 (hddtemp) has been started and output is visible here.
2025-09-19 06:53:17.454299 | orchestrator |
2025-09-19 06:53:17.454416 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-09-19 06:53:17.454432 | orchestrator |
2025-09-19 06:53:17.454445 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-09-19 06:53:17.454473 | orchestrator | Friday 19 September 2025 06:52:54 +0000 (0:00:00.250) 0:00:00.250 ******
2025-09-19 06:53:17.454485 | orchestrator | ok: [testbed-manager]
2025-09-19 06:53:17.454498 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:53:17.454509 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:53:17.454520 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:53:17.454594 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:53:17.454611 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:53:17.454632 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:53:17.454652 | orchestrator |
2025-09-19 06:53:17.454671 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-09-19 06:53:17.454704 | orchestrator | Friday 19 September 2025 06:52:55 +0000 (0:00:00.630) 0:00:00.881 ******
2025-09-19 06:53:17.454726 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 06:53:17.454748 | orchestrator |
2025-09-19 06:53:17.454769 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-09-19 06:53:17.454789 | orchestrator | Friday 19 September 2025 06:52:56 +0000 (0:00:01.044) 0:00:01.925 ******
2025-09-19 06:53:17.454829 | orchestrator | ok: [testbed-manager]
2025-09-19 06:53:17.454840 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:53:17.454851 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:53:17.454864 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:53:17.454883 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:53:17.454902 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:53:17.454936 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:53:17.454949 | orchestrator |
2025-09-19 06:53:17.454960 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-09-19 06:53:17.454971 | orchestrator | Friday 19 September 2025 06:52:58 +0000 (0:00:01.939) 0:00:03.864 ******
2025-09-19 06:53:17.454985 | orchestrator | changed: [testbed-manager]
2025-09-19 06:53:17.455005 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:53:17.455024 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:53:17.455044 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:53:17.455063 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:53:17.455076 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:53:17.455086 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:53:17.455097 | orchestrator |
2025-09-19 06:53:17.455108 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2025-09-19 06:53:17.455119 | orchestrator | Friday 19 September 2025 06:52:59 +0000 (0:00:00.997) 0:00:04.862 ******
2025-09-19 06:53:17.455130 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:53:17.455140 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:53:17.455151 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:53:17.455162 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:53:17.455173 | orchestrator | ok: [testbed-manager]
2025-09-19 06:53:17.455183 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:53:17.455194 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:53:17.455205 | orchestrator |
2025-09-19 06:53:17.455215 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-09-19 06:53:17.455226 | orchestrator | Friday 19 September 2025 06:53:00 +0000 (0:00:01.101) 0:00:05.963 ******
2025-09-19 06:53:17.455237 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:53:17.455248 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:53:17.455259 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:53:17.455270 | orchestrator | changed: [testbed-manager]
2025-09-19 06:53:17.455280 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:53:17.455291 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:53:17.455302 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:53:17.455312 | orchestrator |
2025-09-19 06:53:17.455323 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-09-19 06:53:17.455334 | orchestrator | Friday 19 September 2025 06:53:01 +0000 (0:00:00.670) 0:00:06.633 ******
2025-09-19 06:53:17.455345 | orchestrator | changed: [testbed-manager]
2025-09-19 06:53:17.455355 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:53:17.455366 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:53:17.455377 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:53:17.455387 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:53:17.455398 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:53:17.455408 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:53:17.455423 | orchestrator |
2025-09-19 06:53:17.455442 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-09-19 06:53:17.455462 | orchestrator | Friday 19 September 2025 06:53:14 +0000 (0:00:12.917) 0:00:19.551 ******
2025-09-19 06:53:17.455483 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 06:53:17.455503 | orchestrator |
2025-09-19 06:53:17.455545 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-09-19 06:53:17.455565 | orchestrator | Friday 19 September 2025 06:53:15 +0000 (0:00:01.401) 0:00:20.952 ******
2025-09-19 06:53:17.455585 | orchestrator | changed: [testbed-manager]
2025-09-19 06:53:17.455618 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:53:17.455637 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:53:17.455656 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:53:17.455677 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:53:17.455688 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:53:17.455699 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:53:17.455710 | orchestrator |
2025-09-19 06:53:17.455721 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:53:17.455732 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 06:53:17.455765 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 06:53:17.455786 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 06:53:17.455797 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 06:53:17.455808 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 06:53:17.455819 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 06:53:17.455832 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 06:53:17.455851 | orchestrator |
2025-09-19 06:53:17.455871 | orchestrator |
2025-09-19 06:53:17.455889 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 06:53:17.455907 | orchestrator | Friday 19 September 2025 06:53:17 +0000 (0:00:01.763) 0:00:22.716 ******
2025-09-19 06:53:17.455926 | orchestrator | ===============================================================================
2025-09-19 06:53:17.455945 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.92s
2025-09-19 06:53:17.455964 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.94s
2025-09-19 06:53:17.455979 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.76s
2025-09-19 06:53:17.455990 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.40s
2025-09-19 06:53:17.456001 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.10s
2025-09-19 06:53:17.456012 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.04s
2025-09-19 06:53:17.456023 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.00s
2025-09-19 06:53:17.456034 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.67s
2025-09-19 06:53:17.456044 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.63s
2025-09-19 06:53:17.626759 | orchestrator | ++ semver 9.2.0 7.1.1
2025-09-19 06:53:17.681924 | orchestrator | + [[ 1 -ge 0 ]]
2025-09-19 06:53:17.682003 | orchestrator | + sudo systemctl restart manager.service
2025-09-19 06:53:35.262275 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-09-19 06:53:35.262389 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-09-19 06:53:35.262406 | orchestrator | + local max_attempts=60
2025-09-19 06:53:35.262419 | orchestrator | + local name=ceph-ansible
2025-09-19 06:53:35.262430 | orchestrator | + local attempt_num=1
2025-09-19 06:53:35.262442 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 06:53:35.297508 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 06:53:35.297636 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 06:53:35.297650 | orchestrator | + sleep 5
2025-09-19 06:53:40.304468 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 06:53:40.330396 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 06:53:40.330487 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 06:53:40.330498 | orchestrator | + sleep 5
2025-09-19 06:53:45.333355 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 06:53:45.368238 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 06:53:45.368366 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 06:53:45.368384 | orchestrator | + sleep 5
2025-09-19 06:53:50.371975 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 06:53:50.415317 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 06:53:50.415418 | orchestrator |
+ (( attempt_num++ == max_attempts )) 2025-09-19 06:53:50.415434 | orchestrator | + sleep 5 2025-09-19 06:53:55.419834 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 06:53:55.457560 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-19 06:53:55.457668 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 06:53:55.457683 | orchestrator | + sleep 5 2025-09-19 06:54:00.461366 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 06:54:00.501033 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-19 06:54:00.501107 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 06:54:00.501115 | orchestrator | + sleep 5 2025-09-19 06:54:05.505720 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 06:54:05.547456 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-19 06:54:05.547631 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 06:54:05.547649 | orchestrator | + sleep 5 2025-09-19 06:54:10.556342 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 06:54:10.584197 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-19 06:54:10.584290 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 06:54:10.584305 | orchestrator | + sleep 5 2025-09-19 06:54:15.587064 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 06:54:15.621033 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-19 06:54:15.621127 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 06:54:15.621142 | orchestrator | + sleep 5 2025-09-19 06:54:20.623922 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 06:54:20.658892 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-19 06:54:20.658976 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2025-09-19 06:54:20.658991 | orchestrator | + sleep 5 2025-09-19 06:54:25.663328 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 06:54:25.697641 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-19 06:54:25.697729 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 06:54:25.697742 | orchestrator | + sleep 5 2025-09-19 06:54:30.704237 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 06:54:30.744245 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-19 06:54:30.744342 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 06:54:30.744357 | orchestrator | + sleep 5 2025-09-19 06:54:35.748978 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 06:54:35.789283 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-19 06:54:35.789380 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 06:54:35.789394 | orchestrator | + sleep 5 2025-09-19 06:54:40.794657 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 06:54:40.839952 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-19 06:54:40.840052 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-19 06:54:40.840083 | orchestrator | + local max_attempts=60 2025-09-19 06:54:40.840096 | orchestrator | + local name=kolla-ansible 2025-09-19 06:54:40.840107 | orchestrator | + local attempt_num=1 2025-09-19 06:54:40.840442 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-19 06:54:40.875186 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-19 06:54:40.875285 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-19 06:54:40.875331 | orchestrator | + local max_attempts=60 2025-09-19 06:54:40.875364 | orchestrator | + local name=osism-ansible 2025-09-19 06:54:40.875376 | 
orchestrator | + local attempt_num=1 2025-09-19 06:54:40.876101 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-19 06:54:40.915245 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-19 06:54:40.915370 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-19 06:54:40.915385 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-19 06:54:41.106404 | orchestrator | ARA in ceph-ansible already disabled. 2025-09-19 06:54:41.257504 | orchestrator | ARA in kolla-ansible already disabled. 2025-09-19 06:54:41.434279 | orchestrator | ARA in osism-ansible already disabled. 2025-09-19 06:54:41.616897 | orchestrator | ARA in osism-kubernetes already disabled. 2025-09-19 06:54:41.617318 | orchestrator | + osism apply gather-facts 2025-09-19 06:54:53.714736 | orchestrator | 2025-09-19 06:54:53 | INFO  | Task 83f0e1cb-5e98-43cb-94f1-6943f68c9704 (gather-facts) was prepared for execution. 2025-09-19 06:54:53.714826 | orchestrator | 2025-09-19 06:54:53 | INFO  | It takes a moment until task 83f0e1cb-5e98-43cb-94f1-6943f68c9704 (gather-facts) has been started and output is visible here. 
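The polling loop traced above (repeated `docker inspect` calls separated by `sleep 5`) can be reconstructed as a small shell function. This is a sketch based on the `set -x` trace, not the verbatim testbed script; the function name and argument order follow the trace (`wait_for_container_healthy 60 ceph-ansible`), and plain `docker` stands in for the `/usr/bin/docker` path the trace calls.

```shell
#!/usr/bin/env bash
# Sketch reconstructed from the set -x trace above: poll a container's
# health status until it reports "healthy", giving up after max_attempts
# polls with a 5-second pause between them.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # The trace invokes /usr/bin/docker directly; `docker` is used here.
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the log this loop runs for roughly 65 seconds on ceph-ansible (status moves from `unhealthy` through `starting` to `healthy`), while kolla-ansible and osism-ansible pass on the first poll.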
2025-09-19 06:55:06.186302 | orchestrator | 2025-09-19 06:55:06.186416 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-19 06:55:06.186432 | orchestrator | 2025-09-19 06:55:06.186444 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-19 06:55:06.186456 | orchestrator | Friday 19 September 2025 06:54:57 +0000 (0:00:00.201) 0:00:00.201 ****** 2025-09-19 06:55:06.186467 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:55:06.186480 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:55:06.186492 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:55:06.186503 | orchestrator | ok: [testbed-manager] 2025-09-19 06:55:06.186514 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:55:06.186596 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:55:06.186608 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:55:06.186620 | orchestrator | 2025-09-19 06:55:06.186631 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-19 06:55:06.186642 | orchestrator | 2025-09-19 06:55:06.186654 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-19 06:55:06.186665 | orchestrator | Friday 19 September 2025 06:55:05 +0000 (0:00:08.144) 0:00:08.346 ****** 2025-09-19 06:55:06.186676 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:55:06.186688 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:55:06.186699 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:55:06.186711 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:55:06.186722 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:55:06.186733 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:55:06.186744 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:55:06.186755 | orchestrator | 2025-09-19 06:55:06.186767 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-19 06:55:06.186778 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 06:55:06.186796 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 06:55:06.186816 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 06:55:06.186834 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 06:55:06.186854 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 06:55:06.186874 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 06:55:06.186893 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 06:55:06.186912 | orchestrator | 2025-09-19 06:55:06.186928 | orchestrator | 2025-09-19 06:55:06.186940 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 06:55:06.186979 | orchestrator | Friday 19 September 2025 06:55:05 +0000 (0:00:00.458) 0:00:08.804 ****** 2025-09-19 06:55:06.186990 | orchestrator | =============================================================================== 2025-09-19 06:55:06.187001 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.14s 2025-09-19 06:55:06.187012 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s 2025-09-19 06:55:06.367760 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-09-19 06:55:06.388312 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-09-19 06:55:06.405462 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-09-19 06:55:06.422285 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-09-19 06:55:06.439574 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-09-19 06:55:06.460745 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-09-19 06:55:06.478343 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-09-19 06:55:06.502317 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-09-19 06:55:06.522331 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-09-19 06:55:06.534339 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-09-19 06:55:06.543755 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-09-19 06:55:06.553895 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-09-19 06:55:06.563805 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-09-19 06:55:06.578234 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-09-19 06:55:06.590825 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-09-19 06:55:06.606326 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-09-19 06:55:06.618575 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-09-19 06:55:06.629657 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-09-19 06:55:06.639871 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-09-19 06:55:06.650685 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-09-19 06:55:06.662281 | orchestrator | + [[ false == \t\r\u\e ]] 2025-09-19 06:55:06.894126 | orchestrator | ok: Runtime: 0:22:44.611029 2025-09-19 06:55:06.991292 | 2025-09-19 06:55:06.991425 | TASK [Deploy services] 2025-09-19 06:55:07.533492 | orchestrator | skipping: Conditional result was False 2025-09-19 06:55:07.551144 | 2025-09-19 06:55:07.551298 | TASK [Deploy in a nutshell] 2025-09-19 06:55:08.244993 | orchestrator | 2025-09-19 06:55:08.245148 | orchestrator | # PULL IMAGES 2025-09-19 06:55:08.245174 | orchestrator | 2025-09-19 06:55:08.245188 | orchestrator | + set -e 2025-09-19 06:55:08.245206 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-19 06:55:08.245226 | orchestrator | ++ export INTERACTIVE=false 2025-09-19 06:55:08.245240 | orchestrator | ++ INTERACTIVE=false 2025-09-19 06:55:08.245284 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-19 06:55:08.245306 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-19 06:55:08.245320 | orchestrator | + source /opt/manager-vars.sh 2025-09-19 06:55:08.245331 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-19 06:55:08.245349 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-19 06:55:08.245361 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-19 06:55:08.245379 | orchestrator | ++ 
CEPH_VERSION=reef 2025-09-19 06:55:08.245391 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-19 06:55:08.245408 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-19 06:55:08.245419 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-09-19 06:55:08.245433 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-09-19 06:55:08.245446 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-19 06:55:08.245458 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-19 06:55:08.245469 | orchestrator | ++ export ARA=false 2025-09-19 06:55:08.245481 | orchestrator | ++ ARA=false 2025-09-19 06:55:08.245492 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-19 06:55:08.245503 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-19 06:55:08.245514 | orchestrator | ++ export TEMPEST=false 2025-09-19 06:55:08.245579 | orchestrator | ++ TEMPEST=false 2025-09-19 06:55:08.245591 | orchestrator | ++ export IS_ZUUL=true 2025-09-19 06:55:08.245603 | orchestrator | ++ IS_ZUUL=true 2025-09-19 06:55:08.245614 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189 2025-09-19 06:55:08.245626 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189 2025-09-19 06:55:08.245637 | orchestrator | ++ export EXTERNAL_API=false 2025-09-19 06:55:08.245648 | orchestrator | ++ EXTERNAL_API=false 2025-09-19 06:55:08.245659 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-19 06:55:08.245671 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-19 06:55:08.245682 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-19 06:55:08.245693 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-19 06:55:08.245718 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-19 06:55:08.245735 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-19 06:55:08.245747 | orchestrator | + echo 2025-09-19 06:55:08.245758 | orchestrator | + echo '# PULL IMAGES' 2025-09-19 06:55:08.245770 | orchestrator | + echo 2025-09-19 06:55:08.245794 | orchestrator | ++ semver 9.2.0 7.0.0 2025-09-19 
06:55:08.307261 | orchestrator | + [[ 1 -ge 0 ]] 2025-09-19 06:55:08.307377 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-09-19 06:55:10.144338 | orchestrator | 2025-09-19 06:55:10 | INFO  | Trying to run play pull-images in environment custom 2025-09-19 06:55:20.269664 | orchestrator | 2025-09-19 06:55:20 | INFO  | Task 28436640-e58c-44da-a48d-58ab57176f3e (pull-images) was prepared for execution. 2025-09-19 06:55:20.269794 | orchestrator | 2025-09-19 06:55:20 | INFO  | Task 28436640-e58c-44da-a48d-58ab57176f3e is running in background. No more output. Check ARA for logs. 2025-09-19 06:55:22.280143 | orchestrator | 2025-09-19 06:55:22 | INFO  | Trying to run play wipe-partitions in environment custom 2025-09-19 06:55:32.517957 | orchestrator | 2025-09-19 06:55:32 | INFO  | Task 7293dc07-f198-44d1-8d1d-585da0e8daa2 (wipe-partitions) was prepared for execution. 2025-09-19 06:55:32.518121 | orchestrator | 2025-09-19 06:55:32 | INFO  | It takes a moment until task 7293dc07-f198-44d1-8d1d-585da0e8daa2 (wipe-partitions) has been started and output is visible here. 
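The `++ semver 9.2.0 7.0.0` followed by `+ [[ 1 -ge 0 ]]` pattern seen in the traces above gates steps on the manager version: the helper apparently prints 1, 0, or -1 depending on how the first version compares to the second. A minimal stand-in with that contract (hypothetical name `semver_cmp`, built on GNU `sort -V`; the testbed's actual `semver` helper may differ) could look like:

```shell
#!/usr/bin/env bash
# Sketch of a semver-style comparison, assuming the contract implied by the
# trace: print 1 if $1 > $2, 0 if they are equal, -1 if $1 < $2.
semver_cmp() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" == "$2" ]]; then
        # $2 sorts first under version ordering, so $1 is the newer version.
        echo 1
    else
        echo -1
    fi
}
```

A gate like `[[ $(semver_cmp 9.2.0 7.0.0) -ge 0 ]]` then matches the `[[ 1 -ge 0 ]]` test in the trace: the step runs whenever the manager version is at least the threshold.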
2025-09-19 06:55:44.734988 | orchestrator | 2025-09-19 06:55:44.735108 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-09-19 06:55:44.735126 | orchestrator | 2025-09-19 06:55:44.735138 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-09-19 06:55:44.735155 | orchestrator | Friday 19 September 2025 06:55:36 +0000 (0:00:00.134) 0:00:00.134 ****** 2025-09-19 06:55:44.735167 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:55:44.735179 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:55:44.735191 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:55:44.735203 | orchestrator | 2025-09-19 06:55:44.735215 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-09-19 06:55:44.735251 | orchestrator | Friday 19 September 2025 06:55:37 +0000 (0:00:00.573) 0:00:00.707 ****** 2025-09-19 06:55:44.735263 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:55:44.735274 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:55:44.735285 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:55:44.735300 | orchestrator | 2025-09-19 06:55:44.735312 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-09-19 06:55:44.735323 | orchestrator | Friday 19 September 2025 06:55:37 +0000 (0:00:00.256) 0:00:00.964 ****** 2025-09-19 06:55:44.735335 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:55:44.735347 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:55:44.735358 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:55:44.735370 | orchestrator | 2025-09-19 06:55:44.735381 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-09-19 06:55:44.735392 | orchestrator | Friday 19 September 2025 06:55:38 +0000 (0:00:00.741) 0:00:01.705 ****** 2025-09-19 06:55:44.735404 | orchestrator | skipping: 
[testbed-node-3] 2025-09-19 06:55:44.735415 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:55:44.735426 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:55:44.735438 | orchestrator | 2025-09-19 06:55:44.735449 | orchestrator | TASK [Check device availability] *********************************************** 2025-09-19 06:55:44.735460 | orchestrator | Friday 19 September 2025 06:55:38 +0000 (0:00:00.270) 0:00:01.976 ****** 2025-09-19 06:55:44.735472 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-19 06:55:44.735487 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-19 06:55:44.735499 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-19 06:55:44.735510 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-19 06:55:44.735589 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-19 06:55:44.735601 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-19 06:55:44.735612 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-19 06:55:44.735623 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-19 06:55:44.735634 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-19 06:55:44.735645 | orchestrator | 2025-09-19 06:55:44.735657 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-09-19 06:55:44.735668 | orchestrator | Friday 19 September 2025 06:55:39 +0000 (0:00:01.173) 0:00:03.149 ****** 2025-09-19 06:55:44.735680 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-09-19 06:55:44.735691 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-09-19 06:55:44.735702 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-09-19 06:55:44.735713 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-09-19 06:55:44.735724 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-09-19 06:55:44.735735 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-09-19 06:55:44.735747 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-09-19 06:55:44.735758 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-09-19 06:55:44.735769 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-09-19 06:55:44.735780 | orchestrator | 2025-09-19 06:55:44.735792 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-09-19 06:55:44.735803 | orchestrator | Friday 19 September 2025 06:55:40 +0000 (0:00:01.328) 0:00:04.478 ****** 2025-09-19 06:55:44.735814 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-19 06:55:44.735825 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-19 06:55:44.735836 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-19 06:55:44.735848 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-19 06:55:44.735859 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-19 06:55:44.735870 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-19 06:55:44.735881 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-19 06:55:44.735892 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-19 06:55:44.735919 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-19 06:55:44.735932 | orchestrator | 2025-09-19 06:55:44.735943 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-09-19 06:55:44.735954 | orchestrator | Friday 19 September 2025 06:55:43 +0000 (0:00:02.191) 0:00:06.669 ****** 2025-09-19 06:55:44.735965 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:55:44.735976 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:55:44.735988 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:55:44.735999 | orchestrator | 2025-09-19 06:55:44.736010 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-09-19 06:55:44.736021 | orchestrator | Friday 19 September 2025 06:55:43 +0000 (0:00:00.640) 0:00:07.310 ****** 2025-09-19 06:55:44.736032 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:55:44.736043 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:55:44.736055 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:55:44.736066 | orchestrator | 2025-09-19 06:55:44.736077 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 06:55:44.736089 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 06:55:44.736103 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 06:55:44.736133 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 06:55:44.736145 | orchestrator | 2025-09-19 06:55:44.736156 | orchestrator | 2025-09-19 06:55:44.736167 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 06:55:44.736178 | orchestrator | Friday 19 September 2025 06:55:44 +0000 (0:00:00.596) 0:00:07.907 ****** 2025-09-19 06:55:44.736190 | orchestrator | =============================================================================== 2025-09-19 06:55:44.736201 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.19s 2025-09-19 06:55:44.736212 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.33s 2025-09-19 06:55:44.736223 | orchestrator | Check device availability ----------------------------------------------- 1.17s 2025-09-19 06:55:44.736234 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.74s 2025-09-19 06:55:44.736245 | orchestrator | Reload udev rules 
------------------------------------------------------- 0.64s 2025-09-19 06:55:44.736256 | orchestrator | Request device events from the kernel ----------------------------------- 0.60s 2025-09-19 06:55:44.736267 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.57s 2025-09-19 06:55:44.736278 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.27s 2025-09-19 06:55:44.736289 | orchestrator | Remove all rook related logical devices --------------------------------- 0.26s 2025-09-19 06:55:56.925173 | orchestrator | 2025-09-19 06:55:56 | INFO  | Task 49cfe1c9-2709-4db1-a2b3-53371e49397f (facts) was prepared for execution. 2025-09-19 06:55:56.925286 | orchestrator | 2025-09-19 06:55:56 | INFO  | It takes a moment until task 49cfe1c9-2709-4db1-a2b3-53371e49397f (facts) has been started and output is visible here. 2025-09-19 06:56:08.410957 | orchestrator | 2025-09-19 06:56:08.411114 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-19 06:56:08.411126 | orchestrator | 2025-09-19 06:56:08.411131 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-19 06:56:08.411136 | orchestrator | Friday 19 September 2025 06:56:00 +0000 (0:00:00.253) 0:00:00.253 ****** 2025-09-19 06:56:08.411141 | orchestrator | ok: [testbed-manager] 2025-09-19 06:56:08.411146 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:56:08.411150 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:56:08.411154 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:56:08.411176 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:56:08.411180 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:56:08.411184 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:56:08.411188 | orchestrator | 2025-09-19 06:56:08.411192 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-19 
06:56:08.411196 | orchestrator | Friday 19 September 2025 06:56:01 +0000 (0:00:01.011) 0:00:01.265 ****** 2025-09-19 06:56:08.411201 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:56:08.411206 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:56:08.411210 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:56:08.411214 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:56:08.411218 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:08.411222 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:08.411225 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:08.411229 | orchestrator | 2025-09-19 06:56:08.411233 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-19 06:56:08.411237 | orchestrator | 2025-09-19 06:56:08.411251 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-19 06:56:08.411255 | orchestrator | Friday 19 September 2025 06:56:02 +0000 (0:00:01.123) 0:00:02.388 ****** 2025-09-19 06:56:08.411259 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:56:08.411263 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:56:08.411267 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:56:08.411272 | orchestrator | ok: [testbed-manager] 2025-09-19 06:56:08.411275 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:56:08.411279 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:56:08.411283 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:56:08.411287 | orchestrator | 2025-09-19 06:56:08.411291 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-19 06:56:08.411295 | orchestrator | 2025-09-19 06:56:08.411299 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-19 06:56:08.411303 | orchestrator | Friday 19 September 2025 06:56:07 +0000 (0:00:04.722) 0:00:07.111 ****** 2025-09-19 06:56:08.411307 | 
orchestrator | skipping: [testbed-manager]
2025-09-19 06:56:08.411311 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:56:08.411315 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:56:08.411319 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:56:08.411322 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:56:08.411326 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:56:08.411330 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:56:08.411334 | orchestrator |
2025-09-19 06:56:08.411338 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:56:08.411342 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:56:08.411348 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:56:08.411352 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:56:08.411356 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:56:08.411360 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:56:08.411364 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:56:08.411368 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:56:08.411372 | orchestrator |
2025-09-19 06:56:08.411375 | orchestrator |
2025-09-19 06:56:08.411379 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 06:56:08.411392 | orchestrator | Friday 19 September 2025 06:56:08 +0000 (0:00:00.578) 0:00:07.689 ******
2025-09-19 06:56:08.411396 | orchestrator | ===============================================================================
2025-09-19 06:56:08.411400 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.72s
2025-09-19 06:56:08.411404 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.12s
2025-09-19 06:56:08.411408 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.01s
2025-09-19 06:56:08.411412 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s
2025-09-19 06:56:10.702321 | orchestrator | 2025-09-19 06:56:10 | INFO  | Task 0336afc7-4014-487e-a174-5ca77089b1df (ceph-configure-lvm-volumes) was prepared for execution.
2025-09-19 06:56:10.702396 | orchestrator | 2025-09-19 06:56:10 | INFO  | It takes a moment until task 0336afc7-4014-487e-a174-5ca77089b1df (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-09-19 06:56:22.539563 | orchestrator |
2025-09-19 06:56:22.539674 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-19 06:56:22.539691 | orchestrator |
2025-09-19 06:56:22.539703 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-19 06:56:22.539715 | orchestrator | Friday 19 September 2025 06:56:14 +0000 (0:00:00.325) 0:00:00.325 ******
2025-09-19 06:56:22.539727 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-19 06:56:22.539739 | orchestrator |
2025-09-19 06:56:22.539750 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-19 06:56:22.539761 | orchestrator | Friday 19 September 2025 06:56:15 +0000 (0:00:00.230) 0:00:00.556 ******
2025-09-19 06:56:22.539773 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:56:22.539785 | orchestrator |
2025-09-19 06:56:22.539796 | orchestrator | TASK [Add known links to the list of
available block devices] ****************** 2025-09-19 06:56:22.539807 | orchestrator | Friday 19 September 2025 06:56:15 +0000 (0:00:00.220) 0:00:00.777 ****** 2025-09-19 06:56:22.539818 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-19 06:56:22.539830 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-19 06:56:22.539852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-19 06:56:22.539864 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-19 06:56:22.539876 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-19 06:56:22.539887 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-19 06:56:22.539898 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-19 06:56:22.539909 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-19 06:56:22.539920 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-19 06:56:22.539931 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-19 06:56:22.539942 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-19 06:56:22.539953 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-19 06:56:22.539964 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-19 06:56:22.539975 | orchestrator | 2025-09-19 06:56:22.539986 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:22.539997 | orchestrator 
| Friday 19 September 2025 06:56:15 +0000 (0:00:00.361) 0:00:01.139 ****** 2025-09-19 06:56:22.540008 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:22.540020 | orchestrator | 2025-09-19 06:56:22.540051 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:22.540065 | orchestrator | Friday 19 September 2025 06:56:16 +0000 (0:00:00.475) 0:00:01.614 ****** 2025-09-19 06:56:22.540077 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:22.540090 | orchestrator | 2025-09-19 06:56:22.540102 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:22.540115 | orchestrator | Friday 19 September 2025 06:56:16 +0000 (0:00:00.212) 0:00:01.826 ****** 2025-09-19 06:56:22.540127 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:22.540141 | orchestrator | 2025-09-19 06:56:22.540154 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:22.540167 | orchestrator | Friday 19 September 2025 06:56:16 +0000 (0:00:00.202) 0:00:02.029 ****** 2025-09-19 06:56:22.540179 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:22.540192 | orchestrator | 2025-09-19 06:56:22.540209 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:22.540222 | orchestrator | Friday 19 September 2025 06:56:16 +0000 (0:00:00.217) 0:00:02.246 ****** 2025-09-19 06:56:22.540235 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:22.540249 | orchestrator | 2025-09-19 06:56:22.540261 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:22.540272 | orchestrator | Friday 19 September 2025 06:56:17 +0000 (0:00:00.203) 0:00:02.450 ****** 2025-09-19 06:56:22.540283 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:22.540294 | orchestrator | 2025-09-19 
06:56:22.540306 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:22.540317 | orchestrator | Friday 19 September 2025 06:56:17 +0000 (0:00:00.193) 0:00:02.643 ****** 2025-09-19 06:56:22.540328 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:22.540338 | orchestrator | 2025-09-19 06:56:22.540349 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:22.540360 | orchestrator | Friday 19 September 2025 06:56:17 +0000 (0:00:00.242) 0:00:02.886 ****** 2025-09-19 06:56:22.540371 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:22.540382 | orchestrator | 2025-09-19 06:56:22.540393 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:22.540404 | orchestrator | Friday 19 September 2025 06:56:17 +0000 (0:00:00.228) 0:00:03.114 ****** 2025-09-19 06:56:22.540415 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a) 2025-09-19 06:56:22.540427 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a) 2025-09-19 06:56:22.540438 | orchestrator | 2025-09-19 06:56:22.540449 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:22.540460 | orchestrator | Friday 19 September 2025 06:56:18 +0000 (0:00:00.409) 0:00:03.524 ****** 2025-09-19 06:56:22.540487 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a2591162-fd7d-4f7c-a24f-a875e0bfaf5c) 2025-09-19 06:56:22.540499 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a2591162-fd7d-4f7c-a24f-a875e0bfaf5c) 2025-09-19 06:56:22.540532 | orchestrator | 2025-09-19 06:56:22.540544 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:22.540561 | orchestrator | Friday 
19 September 2025 06:56:18 +0000 (0:00:00.431) 0:00:03.956 ****** 2025-09-19 06:56:22.540572 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1117915d-c4ec-4d47-9877-c3f2a311bdd8) 2025-09-19 06:56:22.540583 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1117915d-c4ec-4d47-9877-c3f2a311bdd8) 2025-09-19 06:56:22.540594 | orchestrator | 2025-09-19 06:56:22.540605 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:22.540616 | orchestrator | Friday 19 September 2025 06:56:19 +0000 (0:00:00.639) 0:00:04.596 ****** 2025-09-19 06:56:22.540627 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_af8571bd-f20f-46c1-9b84-53d29d179301) 2025-09-19 06:56:22.540647 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_af8571bd-f20f-46c1-9b84-53d29d179301) 2025-09-19 06:56:22.540658 | orchestrator | 2025-09-19 06:56:22.540669 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:22.540680 | orchestrator | Friday 19 September 2025 06:56:19 +0000 (0:00:00.608) 0:00:05.204 ****** 2025-09-19 06:56:22.540691 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-19 06:56:22.540702 | orchestrator | 2025-09-19 06:56:22.540713 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:22.540724 | orchestrator | Friday 19 September 2025 06:56:20 +0000 (0:00:00.740) 0:00:05.945 ****** 2025-09-19 06:56:22.540735 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-19 06:56:22.540745 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-19 06:56:22.540756 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-19 06:56:22.540767 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-19 06:56:22.540778 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-19 06:56:22.540790 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-19 06:56:22.540800 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-19 06:56:22.540811 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-19 06:56:22.540822 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-19 06:56:22.540833 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-19 06:56:22.540845 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-19 06:56:22.540855 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-19 06:56:22.540866 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-19 06:56:22.540877 | orchestrator | 2025-09-19 06:56:22.540888 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:22.540899 | orchestrator | Friday 19 September 2025 06:56:20 +0000 (0:00:00.370) 0:00:06.315 ****** 2025-09-19 06:56:22.540910 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:22.540921 | orchestrator | 2025-09-19 06:56:22.540932 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:22.540943 | orchestrator | Friday 19 September 2025 06:56:21 +0000 (0:00:00.214) 0:00:06.529 ****** 2025-09-19 06:56:22.540954 | orchestrator | skipping: [testbed-node-3] 2025-09-19 
06:56:22.540965 | orchestrator | 2025-09-19 06:56:22.540976 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:22.540987 | orchestrator | Friday 19 September 2025 06:56:21 +0000 (0:00:00.190) 0:00:06.720 ****** 2025-09-19 06:56:22.540998 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:22.541009 | orchestrator | 2025-09-19 06:56:22.541020 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:22.541031 | orchestrator | Friday 19 September 2025 06:56:21 +0000 (0:00:00.198) 0:00:06.918 ****** 2025-09-19 06:56:22.541042 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:22.541053 | orchestrator | 2025-09-19 06:56:22.541064 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:22.541075 | orchestrator | Friday 19 September 2025 06:56:21 +0000 (0:00:00.217) 0:00:07.135 ****** 2025-09-19 06:56:22.541086 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:22.541097 | orchestrator | 2025-09-19 06:56:22.541108 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:22.541126 | orchestrator | Friday 19 September 2025 06:56:21 +0000 (0:00:00.184) 0:00:07.320 ****** 2025-09-19 06:56:22.541136 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:22.541147 | orchestrator | 2025-09-19 06:56:22.541158 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:22.541169 | orchestrator | Friday 19 September 2025 06:56:22 +0000 (0:00:00.188) 0:00:07.509 ****** 2025-09-19 06:56:22.541180 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:22.541191 | orchestrator | 2025-09-19 06:56:22.541202 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:22.541214 | orchestrator | Friday 19 
September 2025 06:56:22 +0000 (0:00:00.190) 0:00:07.699 ****** 2025-09-19 06:56:22.541231 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:30.168049 | orchestrator | 2025-09-19 06:56:30.168156 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:30.168174 | orchestrator | Friday 19 September 2025 06:56:22 +0000 (0:00:00.212) 0:00:07.912 ****** 2025-09-19 06:56:30.168186 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-19 06:56:30.168205 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-19 06:56:30.168224 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-19 06:56:30.168252 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-19 06:56:30.168276 | orchestrator | 2025-09-19 06:56:30.168297 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:30.168337 | orchestrator | Friday 19 September 2025 06:56:23 +0000 (0:00:01.045) 0:00:08.957 ****** 2025-09-19 06:56:30.168350 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:30.168362 | orchestrator | 2025-09-19 06:56:30.168373 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:30.168384 | orchestrator | Friday 19 September 2025 06:56:23 +0000 (0:00:00.207) 0:00:09.164 ****** 2025-09-19 06:56:30.168396 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:30.168408 | orchestrator | 2025-09-19 06:56:30.168419 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:30.168430 | orchestrator | Friday 19 September 2025 06:56:23 +0000 (0:00:00.212) 0:00:09.377 ****** 2025-09-19 06:56:30.168441 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:30.168452 | orchestrator | 2025-09-19 06:56:30.168464 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 
06:56:30.168475 | orchestrator | Friday 19 September 2025 06:56:24 +0000 (0:00:00.205) 0:00:09.582 ****** 2025-09-19 06:56:30.168486 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:30.168497 | orchestrator | 2025-09-19 06:56:30.168563 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-19 06:56:30.168578 | orchestrator | Friday 19 September 2025 06:56:24 +0000 (0:00:00.188) 0:00:09.771 ****** 2025-09-19 06:56:30.168591 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-09-19 06:56:30.168605 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-09-19 06:56:30.168618 | orchestrator | 2025-09-19 06:56:30.168630 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-19 06:56:30.168643 | orchestrator | Friday 19 September 2025 06:56:24 +0000 (0:00:00.175) 0:00:09.946 ****** 2025-09-19 06:56:30.168656 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:30.168669 | orchestrator | 2025-09-19 06:56:30.168681 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-19 06:56:30.168694 | orchestrator | Friday 19 September 2025 06:56:24 +0000 (0:00:00.145) 0:00:10.092 ****** 2025-09-19 06:56:30.168706 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:30.168718 | orchestrator | 2025-09-19 06:56:30.168731 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-19 06:56:30.168744 | orchestrator | Friday 19 September 2025 06:56:24 +0000 (0:00:00.137) 0:00:10.230 ****** 2025-09-19 06:56:30.168756 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:30.168769 | orchestrator | 2025-09-19 06:56:30.168801 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-19 06:56:30.168814 | orchestrator | Friday 19 September 2025 06:56:24 +0000 
(0:00:00.133) 0:00:10.364 ****** 2025-09-19 06:56:30.168827 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:56:30.168840 | orchestrator | 2025-09-19 06:56:30.168853 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-19 06:56:30.168866 | orchestrator | Friday 19 September 2025 06:56:25 +0000 (0:00:00.139) 0:00:10.504 ****** 2025-09-19 06:56:30.168879 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '787edb9c-1668-5795-8146-b6ac8c49142c'}}) 2025-09-19 06:56:30.168893 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'af475f18-71a6-5278-b018-36a08189cb1c'}}) 2025-09-19 06:56:30.168914 | orchestrator | 2025-09-19 06:56:30.168933 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-19 06:56:30.168951 | orchestrator | Friday 19 September 2025 06:56:25 +0000 (0:00:00.164) 0:00:10.668 ****** 2025-09-19 06:56:30.168971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '787edb9c-1668-5795-8146-b6ac8c49142c'}})  2025-09-19 06:56:30.169000 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'af475f18-71a6-5278-b018-36a08189cb1c'}})  2025-09-19 06:56:30.169019 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:30.169038 | orchestrator | 2025-09-19 06:56:30.169061 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-19 06:56:30.169087 | orchestrator | Friday 19 September 2025 06:56:25 +0000 (0:00:00.150) 0:00:10.819 ****** 2025-09-19 06:56:30.169106 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '787edb9c-1668-5795-8146-b6ac8c49142c'}})  2025-09-19 06:56:30.169125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'af475f18-71a6-5278-b018-36a08189cb1c'}})  
2025-09-19 06:56:30.169145 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:30.169163 | orchestrator | 2025-09-19 06:56:30.169178 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-19 06:56:30.169190 | orchestrator | Friday 19 September 2025 06:56:25 +0000 (0:00:00.150) 0:00:10.969 ****** 2025-09-19 06:56:30.169201 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '787edb9c-1668-5795-8146-b6ac8c49142c'}})  2025-09-19 06:56:30.169212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'af475f18-71a6-5278-b018-36a08189cb1c'}})  2025-09-19 06:56:30.169223 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:30.169234 | orchestrator | 2025-09-19 06:56:30.169265 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-19 06:56:30.169277 | orchestrator | Friday 19 September 2025 06:56:25 +0000 (0:00:00.365) 0:00:11.334 ****** 2025-09-19 06:56:30.169288 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:56:30.169299 | orchestrator | 2025-09-19 06:56:30.169310 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-19 06:56:30.169322 | orchestrator | Friday 19 September 2025 06:56:26 +0000 (0:00:00.137) 0:00:11.472 ****** 2025-09-19 06:56:30.169333 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:56:30.169344 | orchestrator | 2025-09-19 06:56:30.169355 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-19 06:56:30.169366 | orchestrator | Friday 19 September 2025 06:56:26 +0000 (0:00:00.147) 0:00:11.619 ****** 2025-09-19 06:56:30.169377 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:56:30.169389 | orchestrator | 2025-09-19 06:56:30.169400 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-19 
06:56:30.169411 | orchestrator | Friday 19 September 2025 06:56:26 +0000 (0:00:00.142) 0:00:11.761 ******
2025-09-19 06:56:30.169422 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:56:30.169433 | orchestrator |
2025-09-19 06:56:30.169444 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-09-19 06:56:30.169466 | orchestrator | Friday 19 September 2025 06:56:26 +0000 (0:00:00.133) 0:00:11.895 ******
2025-09-19 06:56:30.169478 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:56:30.169489 | orchestrator |
2025-09-19 06:56:30.169529 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-19 06:56:30.169548 | orchestrator | Friday 19 September 2025 06:56:26 +0000 (0:00:00.149) 0:00:12.033 ******
2025-09-19 06:56:30.169565 | orchestrator | ok: [testbed-node-3] => {
2025-09-19 06:56:30.169585 | orchestrator |     "ceph_osd_devices": {
2025-09-19 06:56:30.169601 | orchestrator |         "sdb": {
2025-09-19 06:56:30.169618 | orchestrator |             "osd_lvm_uuid": "787edb9c-1668-5795-8146-b6ac8c49142c"
2025-09-19 06:56:30.169630 | orchestrator |         },
2025-09-19 06:56:30.169641 | orchestrator |         "sdc": {
2025-09-19 06:56:30.169652 | orchestrator |             "osd_lvm_uuid": "af475f18-71a6-5278-b018-36a08189cb1c"
2025-09-19 06:56:30.169663 | orchestrator |         }
2025-09-19 06:56:30.169674 | orchestrator |     }
2025-09-19 06:56:30.169685 | orchestrator | }
2025-09-19 06:56:30.169697 | orchestrator |
2025-09-19 06:56:30.169709 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-19 06:56:30.169720 | orchestrator | Friday 19 September 2025 06:56:26 +0000 (0:00:00.149) 0:00:12.182 ******
2025-09-19 06:56:30.169731 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:56:30.169742 | orchestrator |
2025-09-19 06:56:30.169753 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-19 06:56:30.169764 | orchestrator | Friday 19 September 2025 06:56:26 +0000 (0:00:00.146) 0:00:12.329 ******
2025-09-19 06:56:30.169782 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:56:30.169794 | orchestrator |
2025-09-19 06:56:30.169805 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-19 06:56:30.169816 | orchestrator | Friday 19 September 2025 06:56:27 +0000 (0:00:00.146) 0:00:12.476 ******
2025-09-19 06:56:30.169827 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:56:30.169838 | orchestrator |
2025-09-19 06:56:30.169849 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-19 06:56:30.169860 | orchestrator | Friday 19 September 2025 06:56:27 +0000 (0:00:00.150) 0:00:12.626 ******
2025-09-19 06:56:30.169872 | orchestrator | changed: [testbed-node-3] => {
2025-09-19 06:56:30.169883 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-09-19 06:56:30.169894 | orchestrator |         "ceph_osd_devices": {
2025-09-19 06:56:30.169905 | orchestrator |             "sdb": {
2025-09-19 06:56:30.169916 | orchestrator |                 "osd_lvm_uuid": "787edb9c-1668-5795-8146-b6ac8c49142c"
2025-09-19 06:56:30.169927 | orchestrator |             },
2025-09-19 06:56:30.169938 | orchestrator |             "sdc": {
2025-09-19 06:56:30.169950 | orchestrator |                 "osd_lvm_uuid": "af475f18-71a6-5278-b018-36a08189cb1c"
2025-09-19 06:56:30.169961 | orchestrator |             }
2025-09-19 06:56:30.169972 | orchestrator |         },
2025-09-19 06:56:30.169983 | orchestrator |         "lvm_volumes": [
2025-09-19 06:56:30.169994 | orchestrator |             {
2025-09-19 06:56:30.170005 | orchestrator |                 "data": "osd-block-787edb9c-1668-5795-8146-b6ac8c49142c",
2025-09-19 06:56:30.170071 | orchestrator |                 "data_vg": "ceph-787edb9c-1668-5795-8146-b6ac8c49142c"
2025-09-19 06:56:30.170084 | orchestrator |             },
2025-09-19 06:56:30.170096 | orchestrator |             {
2025-09-19 06:56:30.170107 | orchestrator |                 "data": "osd-block-af475f18-71a6-5278-b018-36a08189cb1c",
2025-09-19 06:56:30.170118 | orchestrator |                 "data_vg": "ceph-af475f18-71a6-5278-b018-36a08189cb1c"
2025-09-19 06:56:30.170129 | orchestrator |             }
2025-09-19 06:56:30.170141 | orchestrator |         ]
2025-09-19 06:56:30.170152 | orchestrator |     }
2025-09-19 06:56:30.170163 | orchestrator | }
2025-09-19 06:56:30.170174 | orchestrator |
2025-09-19 06:56:30.170185 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-19 06:56:30.170197 | orchestrator | Friday 19 September 2025 06:56:27 +0000 (0:00:00.207) 0:00:12.833 ******
2025-09-19 06:56:30.170218 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-19 06:56:30.170229 | orchestrator |
2025-09-19 06:56:30.170241 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-19 06:56:30.170252 | orchestrator |
2025-09-19 06:56:30.170263 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-19 06:56:30.170275 | orchestrator | Friday 19 September 2025 06:56:29 +0000 (0:00:02.207) 0:00:15.041 ******
2025-09-19 06:56:30.170286 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-09-19 06:56:30.170297 | orchestrator |
2025-09-19 06:56:30.170308 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-19 06:56:30.170319 | orchestrator | Friday 19 September 2025 06:56:29 +0000 (0:00:00.251) 0:00:15.293 ******
2025-09-19 06:56:30.170331 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:56:30.170342 | orchestrator |
2025-09-19 06:56:30.170353 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:56:30.170373 | orchestrator | Friday 19 September 2025 06:56:30 +0000 (0:00:00.246) 0:00:15.539 ******
2025-09-19 06:56:37.647170 | orchestrator | included:
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-19 06:56:37.647288 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-19 06:56:37.647312 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-19 06:56:37.647332 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-19 06:56:37.647352 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-19 06:56:37.647372 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-19 06:56:37.647391 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-19 06:56:37.647403 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-19 06:56:37.647414 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-19 06:56:37.647426 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-19 06:56:37.647457 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-19 06:56:37.647469 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-19 06:56:37.647480 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-19 06:56:37.647491 | orchestrator | 2025-09-19 06:56:37.647569 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:37.647583 | orchestrator | Friday 19 September 2025 06:56:30 +0000 (0:00:00.395) 0:00:15.935 ****** 2025-09-19 06:56:37.647595 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:37.647607 | orchestrator | 2025-09-19 
06:56:37.647619 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:37.647630 | orchestrator | Friday 19 September 2025 06:56:30 +0000 (0:00:00.185) 0:00:16.120 ****** 2025-09-19 06:56:37.647641 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:37.647652 | orchestrator | 2025-09-19 06:56:37.647663 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:37.647674 | orchestrator | Friday 19 September 2025 06:56:30 +0000 (0:00:00.194) 0:00:16.314 ****** 2025-09-19 06:56:37.647686 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:37.647699 | orchestrator | 2025-09-19 06:56:37.647711 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:37.647724 | orchestrator | Friday 19 September 2025 06:56:31 +0000 (0:00:00.203) 0:00:16.518 ****** 2025-09-19 06:56:37.647736 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:37.647749 | orchestrator | 2025-09-19 06:56:37.647782 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:37.647796 | orchestrator | Friday 19 September 2025 06:56:31 +0000 (0:00:00.191) 0:00:16.710 ****** 2025-09-19 06:56:37.647808 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:37.647821 | orchestrator | 2025-09-19 06:56:37.647834 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:37.647847 | orchestrator | Friday 19 September 2025 06:56:31 +0000 (0:00:00.196) 0:00:16.907 ****** 2025-09-19 06:56:37.647860 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:37.647873 | orchestrator | 2025-09-19 06:56:37.647885 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:37.647898 | orchestrator | Friday 19 September 2025 06:56:32 +0000 (0:00:00.601) 
0:00:17.508 ****** 2025-09-19 06:56:37.647910 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:37.647923 | orchestrator | 2025-09-19 06:56:37.647935 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:37.647947 | orchestrator | Friday 19 September 2025 06:56:32 +0000 (0:00:00.209) 0:00:17.718 ****** 2025-09-19 06:56:37.647960 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:37.647972 | orchestrator | 2025-09-19 06:56:37.647984 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:37.647997 | orchestrator | Friday 19 September 2025 06:56:32 +0000 (0:00:00.220) 0:00:17.938 ****** 2025-09-19 06:56:37.648010 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd) 2025-09-19 06:56:37.648024 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd) 2025-09-19 06:56:37.648035 | orchestrator | 2025-09-19 06:56:37.648046 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:37.648057 | orchestrator | Friday 19 September 2025 06:56:33 +0000 (0:00:00.488) 0:00:18.427 ****** 2025-09-19 06:56:37.648068 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9b35f7c3-f4ee-4f20-a638-8acbecbf2b97) 2025-09-19 06:56:37.648079 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9b35f7c3-f4ee-4f20-a638-8acbecbf2b97) 2025-09-19 06:56:37.648090 | orchestrator | 2025-09-19 06:56:37.648101 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:37.648112 | orchestrator | Friday 19 September 2025 06:56:33 +0000 (0:00:00.423) 0:00:18.850 ****** 2025-09-19 06:56:37.648123 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0ec87ec4-de78-4354-a913-8c3da733e508) 
2025-09-19 06:56:37.648137 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0ec87ec4-de78-4354-a913-8c3da733e508) 2025-09-19 06:56:37.648156 | orchestrator | 2025-09-19 06:56:37.648174 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:37.648193 | orchestrator | Friday 19 September 2025 06:56:33 +0000 (0:00:00.377) 0:00:19.228 ****** 2025-09-19 06:56:37.648232 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f326ea53-fd8a-4d1e-8637-ed74e9f7229b) 2025-09-19 06:56:37.648253 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f326ea53-fd8a-4d1e-8637-ed74e9f7229b) 2025-09-19 06:56:37.648273 | orchestrator | 2025-09-19 06:56:37.648292 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:37.648303 | orchestrator | Friday 19 September 2025 06:56:34 +0000 (0:00:00.410) 0:00:19.638 ****** 2025-09-19 06:56:37.648314 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-19 06:56:37.648325 | orchestrator | 2025-09-19 06:56:37.648337 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:37.648355 | orchestrator | Friday 19 September 2025 06:56:34 +0000 (0:00:00.344) 0:00:19.982 ****** 2025-09-19 06:56:37.648367 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-19 06:56:37.648378 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-19 06:56:37.648398 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-19 06:56:37.648410 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-19 06:56:37.648429 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for 
testbed-node-4 => (item=loop4) 2025-09-19 06:56:37.648447 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-19 06:56:37.648465 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-19 06:56:37.648484 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-19 06:56:37.648526 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-19 06:56:37.648545 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-19 06:56:37.648560 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-19 06:56:37.648571 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-19 06:56:37.648582 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-19 06:56:37.648593 | orchestrator | 2025-09-19 06:56:37.648604 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:37.648615 | orchestrator | Friday 19 September 2025 06:56:34 +0000 (0:00:00.373) 0:00:20.356 ****** 2025-09-19 06:56:37.648626 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:37.648637 | orchestrator | 2025-09-19 06:56:37.648648 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:37.648659 | orchestrator | Friday 19 September 2025 06:56:35 +0000 (0:00:00.203) 0:00:20.559 ****** 2025-09-19 06:56:37.648670 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:37.648682 | orchestrator | 2025-09-19 06:56:37.648694 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:37.648705 | orchestrator | 
Friday 19 September 2025 06:56:35 +0000 (0:00:00.494) 0:00:21.053 ****** 2025-09-19 06:56:37.648716 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:37.648727 | orchestrator | 2025-09-19 06:56:37.648738 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:37.648749 | orchestrator | Friday 19 September 2025 06:56:35 +0000 (0:00:00.173) 0:00:21.226 ****** 2025-09-19 06:56:37.648760 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:37.648787 | orchestrator | 2025-09-19 06:56:37.648814 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:37.648826 | orchestrator | Friday 19 September 2025 06:56:36 +0000 (0:00:00.171) 0:00:21.398 ****** 2025-09-19 06:56:37.648837 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:37.648848 | orchestrator | 2025-09-19 06:56:37.648859 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:37.648870 | orchestrator | Friday 19 September 2025 06:56:36 +0000 (0:00:00.164) 0:00:21.562 ****** 2025-09-19 06:56:37.648881 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:37.648900 | orchestrator | 2025-09-19 06:56:37.648919 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:37.648937 | orchestrator | Friday 19 September 2025 06:56:36 +0000 (0:00:00.186) 0:00:21.748 ****** 2025-09-19 06:56:37.648955 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:37.648974 | orchestrator | 2025-09-19 06:56:37.648993 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:37.649013 | orchestrator | Friday 19 September 2025 06:56:36 +0000 (0:00:00.162) 0:00:21.911 ****** 2025-09-19 06:56:37.649025 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:37.649036 | orchestrator | 2025-09-19 06:56:37.649047 
| orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:37.649066 | orchestrator | Friday 19 September 2025 06:56:36 +0000 (0:00:00.170) 0:00:22.081 ****** 2025-09-19 06:56:37.649078 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-19 06:56:37.649089 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-19 06:56:37.649101 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-19 06:56:37.649112 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-19 06:56:37.649123 | orchestrator | 2025-09-19 06:56:37.649134 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:37.649145 | orchestrator | Friday 19 September 2025 06:56:37 +0000 (0:00:00.782) 0:00:22.864 ****** 2025-09-19 06:56:37.649156 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:37.649167 | orchestrator | 2025-09-19 06:56:37.649186 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:43.324845 | orchestrator | Friday 19 September 2025 06:56:37 +0000 (0:00:00.157) 0:00:23.022 ****** 2025-09-19 06:56:43.324947 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:43.324961 | orchestrator | 2025-09-19 06:56:43.324973 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:43.324983 | orchestrator | Friday 19 September 2025 06:56:37 +0000 (0:00:00.146) 0:00:23.168 ****** 2025-09-19 06:56:43.324993 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:43.325004 | orchestrator | 2025-09-19 06:56:43.325014 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:43.325024 | orchestrator | Friday 19 September 2025 06:56:38 +0000 (0:00:00.284) 0:00:23.453 ****** 2025-09-19 06:56:43.325034 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:43.325044 | 
orchestrator | 2025-09-19 06:56:43.325071 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-19 06:56:43.325082 | orchestrator | Friday 19 September 2025 06:56:38 +0000 (0:00:00.182) 0:00:23.635 ****** 2025-09-19 06:56:43.325092 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-09-19 06:56:43.325101 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-09-19 06:56:43.325111 | orchestrator | 2025-09-19 06:56:43.325121 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-19 06:56:43.325131 | orchestrator | Friday 19 September 2025 06:56:38 +0000 (0:00:00.264) 0:00:23.900 ****** 2025-09-19 06:56:43.325141 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:43.325151 | orchestrator | 2025-09-19 06:56:43.325161 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-19 06:56:43.325171 | orchestrator | Friday 19 September 2025 06:56:38 +0000 (0:00:00.161) 0:00:24.061 ****** 2025-09-19 06:56:43.325181 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:43.325191 | orchestrator | 2025-09-19 06:56:43.325201 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-19 06:56:43.325211 | orchestrator | Friday 19 September 2025 06:56:38 +0000 (0:00:00.123) 0:00:24.185 ****** 2025-09-19 06:56:43.325221 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:43.325231 | orchestrator | 2025-09-19 06:56:43.325241 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-19 06:56:43.325251 | orchestrator | Friday 19 September 2025 06:56:38 +0000 (0:00:00.119) 0:00:24.304 ****** 2025-09-19 06:56:43.325261 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:56:43.325271 | orchestrator | 2025-09-19 06:56:43.325281 | orchestrator | TASK [Generate 
lvm_volumes structure (block only)] ***************************** 2025-09-19 06:56:43.325291 | orchestrator | Friday 19 September 2025 06:56:39 +0000 (0:00:00.134) 0:00:24.439 ****** 2025-09-19 06:56:43.325302 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5631a8c0-2403-5b6d-b4ab-3f734fe52f75'}}) 2025-09-19 06:56:43.325312 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '32fceb46-e08d-5445-84d6-a85b98e59ab0'}}) 2025-09-19 06:56:43.325322 | orchestrator | 2025-09-19 06:56:43.325332 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-19 06:56:43.325359 | orchestrator | Friday 19 September 2025 06:56:39 +0000 (0:00:00.144) 0:00:24.584 ****** 2025-09-19 06:56:43.325370 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5631a8c0-2403-5b6d-b4ab-3f734fe52f75'}})  2025-09-19 06:56:43.325382 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '32fceb46-e08d-5445-84d6-a85b98e59ab0'}})  2025-09-19 06:56:43.325394 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:43.325405 | orchestrator | 2025-09-19 06:56:43.325417 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-19 06:56:43.325428 | orchestrator | Friday 19 September 2025 06:56:39 +0000 (0:00:00.150) 0:00:24.734 ****** 2025-09-19 06:56:43.325439 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5631a8c0-2403-5b6d-b4ab-3f734fe52f75'}})  2025-09-19 06:56:43.325451 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '32fceb46-e08d-5445-84d6-a85b98e59ab0'}})  2025-09-19 06:56:43.325462 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:43.325473 | orchestrator | 2025-09-19 06:56:43.325484 | orchestrator | TASK [Generate lvm_volumes structure (block 
+ db + wal)] *********************** 2025-09-19 06:56:43.325515 | orchestrator | Friday 19 September 2025 06:56:39 +0000 (0:00:00.183) 0:00:24.918 ****** 2025-09-19 06:56:43.325527 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5631a8c0-2403-5b6d-b4ab-3f734fe52f75'}})  2025-09-19 06:56:43.325538 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '32fceb46-e08d-5445-84d6-a85b98e59ab0'}})  2025-09-19 06:56:43.325550 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:43.325562 | orchestrator | 2025-09-19 06:56:43.325573 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-19 06:56:43.325585 | orchestrator | Friday 19 September 2025 06:56:39 +0000 (0:00:00.150) 0:00:25.068 ****** 2025-09-19 06:56:43.325597 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:56:43.325608 | orchestrator | 2025-09-19 06:56:43.325619 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-19 06:56:43.325631 | orchestrator | Friday 19 September 2025 06:56:39 +0000 (0:00:00.100) 0:00:25.169 ****** 2025-09-19 06:56:43.325642 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:56:43.325653 | orchestrator | 2025-09-19 06:56:43.325664 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-19 06:56:43.325675 | orchestrator | Friday 19 September 2025 06:56:39 +0000 (0:00:00.109) 0:00:25.278 ****** 2025-09-19 06:56:43.325687 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:43.325698 | orchestrator | 2025-09-19 06:56:43.325723 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-19 06:56:43.325735 | orchestrator | Friday 19 September 2025 06:56:39 +0000 (0:00:00.099) 0:00:25.378 ****** 2025-09-19 06:56:43.325747 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:43.325758 | 
orchestrator | 2025-09-19 06:56:43.325768 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-19 06:56:43.325778 | orchestrator | Friday 19 September 2025 06:56:40 +0000 (0:00:00.241) 0:00:25.620 ****** 2025-09-19 06:56:43.325788 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:43.325797 | orchestrator | 2025-09-19 06:56:43.325807 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-19 06:56:43.325817 | orchestrator | Friday 19 September 2025 06:56:40 +0000 (0:00:00.111) 0:00:25.731 ****** 2025-09-19 06:56:43.325827 | orchestrator | ok: [testbed-node-4] => { 2025-09-19 06:56:43.325837 | orchestrator |  "ceph_osd_devices": { 2025-09-19 06:56:43.325847 | orchestrator |  "sdb": { 2025-09-19 06:56:43.325857 | orchestrator |  "osd_lvm_uuid": "5631a8c0-2403-5b6d-b4ab-3f734fe52f75" 2025-09-19 06:56:43.325867 | orchestrator |  }, 2025-09-19 06:56:43.325877 | orchestrator |  "sdc": { 2025-09-19 06:56:43.325887 | orchestrator |  "osd_lvm_uuid": "32fceb46-e08d-5445-84d6-a85b98e59ab0" 2025-09-19 06:56:43.325904 | orchestrator |  } 2025-09-19 06:56:43.325914 | orchestrator |  } 2025-09-19 06:56:43.325924 | orchestrator | } 2025-09-19 06:56:43.325934 | orchestrator | 2025-09-19 06:56:43.325944 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-19 06:56:43.325954 | orchestrator | Friday 19 September 2025 06:56:40 +0000 (0:00:00.139) 0:00:25.870 ****** 2025-09-19 06:56:43.325963 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:43.325973 | orchestrator | 2025-09-19 06:56:43.325989 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-19 06:56:43.325999 | orchestrator | Friday 19 September 2025 06:56:40 +0000 (0:00:00.123) 0:00:25.993 ****** 2025-09-19 06:56:43.326009 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:43.326072 | 
orchestrator | 2025-09-19 06:56:43.326082 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-19 06:56:43.326092 | orchestrator | Friday 19 September 2025 06:56:40 +0000 (0:00:00.130) 0:00:26.124 ****** 2025-09-19 06:56:43.326102 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:56:43.326112 | orchestrator | 2025-09-19 06:56:43.326122 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-19 06:56:43.326132 | orchestrator | Friday 19 September 2025 06:56:40 +0000 (0:00:00.099) 0:00:26.224 ****** 2025-09-19 06:56:43.326141 | orchestrator | changed: [testbed-node-4] => { 2025-09-19 06:56:43.326151 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-19 06:56:43.326161 | orchestrator |  "ceph_osd_devices": { 2025-09-19 06:56:43.326171 | orchestrator |  "sdb": { 2025-09-19 06:56:43.326181 | orchestrator |  "osd_lvm_uuid": "5631a8c0-2403-5b6d-b4ab-3f734fe52f75" 2025-09-19 06:56:43.326191 | orchestrator |  }, 2025-09-19 06:56:43.326205 | orchestrator |  "sdc": { 2025-09-19 06:56:43.326216 | orchestrator |  "osd_lvm_uuid": "32fceb46-e08d-5445-84d6-a85b98e59ab0" 2025-09-19 06:56:43.326225 | orchestrator |  } 2025-09-19 06:56:43.326235 | orchestrator |  }, 2025-09-19 06:56:43.326245 | orchestrator |  "lvm_volumes": [ 2025-09-19 06:56:43.326255 | orchestrator |  { 2025-09-19 06:56:43.326264 | orchestrator |  "data": "osd-block-5631a8c0-2403-5b6d-b4ab-3f734fe52f75", 2025-09-19 06:56:43.326275 | orchestrator |  "data_vg": "ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75" 2025-09-19 06:56:43.326284 | orchestrator |  }, 2025-09-19 06:56:43.326294 | orchestrator |  { 2025-09-19 06:56:43.326304 | orchestrator |  "data": "osd-block-32fceb46-e08d-5445-84d6-a85b98e59ab0", 2025-09-19 06:56:43.326314 | orchestrator |  "data_vg": "ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0" 2025-09-19 06:56:43.326324 | orchestrator |  } 2025-09-19 06:56:43.326333 | orchestrator |  ] 
2025-09-19 06:56:43.326343 | orchestrator |  } 2025-09-19 06:56:43.326353 | orchestrator | } 2025-09-19 06:56:43.326363 | orchestrator | 2025-09-19 06:56:43.326373 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-19 06:56:43.326383 | orchestrator | Friday 19 September 2025 06:56:41 +0000 (0:00:00.161) 0:00:26.385 ****** 2025-09-19 06:56:43.326393 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-19 06:56:43.326403 | orchestrator | 2025-09-19 06:56:43.326412 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-19 06:56:43.326422 | orchestrator | 2025-09-19 06:56:43.326432 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-19 06:56:43.326442 | orchestrator | Friday 19 September 2025 06:56:42 +0000 (0:00:01.002) 0:00:27.388 ****** 2025-09-19 06:56:43.326452 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-19 06:56:43.326462 | orchestrator | 2025-09-19 06:56:43.326471 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-19 06:56:43.326481 | orchestrator | Friday 19 September 2025 06:56:42 +0000 (0:00:00.391) 0:00:27.780 ****** 2025-09-19 06:56:43.326491 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:56:43.326567 | orchestrator | 2025-09-19 06:56:43.326578 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:43.326588 | orchestrator | Friday 19 September 2025 06:56:42 +0000 (0:00:00.550) 0:00:28.331 ****** 2025-09-19 06:56:43.326598 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-19 06:56:43.326608 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-19 06:56:43.326618 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-19 06:56:43.326627 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-19 06:56:43.326637 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-19 06:56:43.326647 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-19 06:56:43.326664 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-19 06:56:51.564638 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-19 06:56:51.564744 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-19 06:56:51.564760 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-19 06:56:51.564772 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-19 06:56:51.564784 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-19 06:56:51.564795 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-19 06:56:51.564807 | orchestrator | 2025-09-19 06:56:51.564819 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:51.564831 | orchestrator | Friday 19 September 2025 06:56:43 +0000 (0:00:00.365) 0:00:28.696 ****** 2025-09-19 06:56:51.564842 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:51.564854 | orchestrator | 2025-09-19 06:56:51.564866 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:51.564877 | orchestrator | Friday 19 September 2025 06:56:43 +0000 (0:00:00.204) 0:00:28.901 ****** 2025-09-19 06:56:51.564888 | 
orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:51.564899 | orchestrator | 2025-09-19 06:56:51.564910 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:51.564921 | orchestrator | Friday 19 September 2025 06:56:43 +0000 (0:00:00.217) 0:00:29.118 ****** 2025-09-19 06:56:51.564932 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:51.564943 | orchestrator | 2025-09-19 06:56:51.564954 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:51.564966 | orchestrator | Friday 19 September 2025 06:56:43 +0000 (0:00:00.191) 0:00:29.309 ****** 2025-09-19 06:56:51.564977 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:51.564988 | orchestrator | 2025-09-19 06:56:51.564999 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:51.565010 | orchestrator | Friday 19 September 2025 06:56:44 +0000 (0:00:00.223) 0:00:29.533 ****** 2025-09-19 06:56:51.565021 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:51.565032 | orchestrator | 2025-09-19 06:56:51.565043 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:51.565055 | orchestrator | Friday 19 September 2025 06:56:44 +0000 (0:00:00.216) 0:00:29.749 ****** 2025-09-19 06:56:51.565066 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:51.565077 | orchestrator | 2025-09-19 06:56:51.565088 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:51.565101 | orchestrator | Friday 19 September 2025 06:56:44 +0000 (0:00:00.178) 0:00:29.928 ****** 2025-09-19 06:56:51.565114 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:51.565127 | orchestrator | 2025-09-19 06:56:51.565163 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 
2025-09-19 06:56:51.565176 | orchestrator | Friday 19 September 2025 06:56:44 +0000 (0:00:00.261) 0:00:30.189 ****** 2025-09-19 06:56:51.565188 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:51.565201 | orchestrator | 2025-09-19 06:56:51.565229 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:51.565243 | orchestrator | Friday 19 September 2025 06:56:45 +0000 (0:00:00.225) 0:00:30.415 ****** 2025-09-19 06:56:51.565255 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d) 2025-09-19 06:56:51.565269 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d) 2025-09-19 06:56:51.565282 | orchestrator | 2025-09-19 06:56:51.565295 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:51.565309 | orchestrator | Friday 19 September 2025 06:56:45 +0000 (0:00:00.747) 0:00:31.162 ****** 2025-09-19 06:56:51.565321 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1f9d1cec-7d6c-4c71-8749-cd7e53c954b2) 2025-09-19 06:56:51.565334 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1f9d1cec-7d6c-4c71-8749-cd7e53c954b2) 2025-09-19 06:56:51.565347 | orchestrator | 2025-09-19 06:56:51.565359 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:51.565372 | orchestrator | Friday 19 September 2025 06:56:46 +0000 (0:00:00.890) 0:00:32.053 ****** 2025-09-19 06:56:51.565384 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_68d7532d-29ea-4f3d-b7b6-675f70301c39) 2025-09-19 06:56:51.565397 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_68d7532d-29ea-4f3d-b7b6-675f70301c39) 2025-09-19 06:56:51.565410 | orchestrator | 2025-09-19 06:56:51.565423 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2025-09-19 06:56:51.565436 | orchestrator | Friday 19 September 2025 06:56:47 +0000 (0:00:00.439) 0:00:32.492 ****** 2025-09-19 06:56:51.565449 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c8e79e65-71f7-4ae8-8fa4-6c07ef757528) 2025-09-19 06:56:51.565460 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c8e79e65-71f7-4ae8-8fa4-6c07ef757528) 2025-09-19 06:56:51.565471 | orchestrator | 2025-09-19 06:56:51.565481 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:56:51.565512 | orchestrator | Friday 19 September 2025 06:56:47 +0000 (0:00:00.438) 0:00:32.931 ****** 2025-09-19 06:56:51.565524 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-19 06:56:51.565535 | orchestrator | 2025-09-19 06:56:51.565546 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:51.565558 | orchestrator | Friday 19 September 2025 06:56:47 +0000 (0:00:00.361) 0:00:33.292 ****** 2025-09-19 06:56:51.565584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-19 06:56:51.565596 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-19 06:56:51.565607 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-19 06:56:51.565619 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-19 06:56:51.565629 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-19 06:56:51.565640 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-19 06:56:51.565651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for 
testbed-node-5 => (item=loop6) 2025-09-19 06:56:51.565662 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-19 06:56:51.565673 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-19 06:56:51.565693 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-19 06:56:51.565704 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-19 06:56:51.565715 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-19 06:56:51.565726 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-19 06:56:51.565737 | orchestrator | 2025-09-19 06:56:51.565748 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:51.565759 | orchestrator | Friday 19 September 2025 06:56:48 +0000 (0:00:00.538) 0:00:33.831 ****** 2025-09-19 06:56:51.565770 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:51.565781 | orchestrator | 2025-09-19 06:56:51.565792 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:51.565802 | orchestrator | Friday 19 September 2025 06:56:48 +0000 (0:00:00.172) 0:00:34.004 ****** 2025-09-19 06:56:51.565813 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:51.565824 | orchestrator | 2025-09-19 06:56:51.565835 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:51.565846 | orchestrator | Friday 19 September 2025 06:56:48 +0000 (0:00:00.168) 0:00:34.173 ****** 2025-09-19 06:56:51.565857 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:51.565868 | orchestrator | 2025-09-19 06:56:51.565879 | orchestrator | TASK [Add known partitions to the 
list of available block devices] ************* 2025-09-19 06:56:51.565890 | orchestrator | Friday 19 September 2025 06:56:49 +0000 (0:00:00.211) 0:00:34.384 ****** 2025-09-19 06:56:51.565901 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:51.565912 | orchestrator | 2025-09-19 06:56:51.565923 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:51.565934 | orchestrator | Friday 19 September 2025 06:56:49 +0000 (0:00:00.174) 0:00:34.559 ****** 2025-09-19 06:56:51.565945 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:51.565956 | orchestrator | 2025-09-19 06:56:51.565967 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:51.565978 | orchestrator | Friday 19 September 2025 06:56:49 +0000 (0:00:00.173) 0:00:34.733 ****** 2025-09-19 06:56:51.565988 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:51.565999 | orchestrator | 2025-09-19 06:56:51.566010 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:51.566078 | orchestrator | Friday 19 September 2025 06:56:49 +0000 (0:00:00.464) 0:00:35.197 ****** 2025-09-19 06:56:51.566090 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:51.566100 | orchestrator | 2025-09-19 06:56:51.566111 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:51.566123 | orchestrator | Friday 19 September 2025 06:56:50 +0000 (0:00:00.214) 0:00:35.412 ****** 2025-09-19 06:56:51.566134 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:51.566145 | orchestrator | 2025-09-19 06:56:51.566155 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:51.566167 | orchestrator | Friday 19 September 2025 06:56:50 +0000 (0:00:00.168) 0:00:35.580 ****** 2025-09-19 06:56:51.566177 | orchestrator | ok: 
[testbed-node-5] => (item=sda1) 2025-09-19 06:56:51.566188 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-19 06:56:51.566200 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-19 06:56:51.566211 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-19 06:56:51.566222 | orchestrator | 2025-09-19 06:56:51.566233 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:51.566244 | orchestrator | Friday 19 September 2025 06:56:50 +0000 (0:00:00.604) 0:00:36.185 ****** 2025-09-19 06:56:51.566255 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:51.566266 | orchestrator | 2025-09-19 06:56:51.566277 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:51.566288 | orchestrator | Friday 19 September 2025 06:56:50 +0000 (0:00:00.179) 0:00:36.364 ****** 2025-09-19 06:56:51.566306 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:51.566317 | orchestrator | 2025-09-19 06:56:51.566328 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:51.566339 | orchestrator | Friday 19 September 2025 06:56:51 +0000 (0:00:00.202) 0:00:36.567 ****** 2025-09-19 06:56:51.566350 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:51.566361 | orchestrator | 2025-09-19 06:56:51.566372 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:56:51.566383 | orchestrator | Friday 19 September 2025 06:56:51 +0000 (0:00:00.178) 0:00:36.746 ****** 2025-09-19 06:56:51.566400 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:51.566412 | orchestrator | 2025-09-19 06:56:51.566423 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-19 06:56:51.566440 | orchestrator | Friday 19 September 2025 06:56:51 +0000 (0:00:00.195) 0:00:36.941 ****** 2025-09-19 
06:56:55.261988 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-09-19 06:56:55.262207 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-09-19 06:56:55.262224 | orchestrator | 2025-09-19 06:56:55.262238 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-19 06:56:55.262249 | orchestrator | Friday 19 September 2025 06:56:51 +0000 (0:00:00.161) 0:00:37.103 ****** 2025-09-19 06:56:55.262261 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:55.262273 | orchestrator | 2025-09-19 06:56:55.262284 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-19 06:56:55.262295 | orchestrator | Friday 19 September 2025 06:56:51 +0000 (0:00:00.116) 0:00:37.220 ****** 2025-09-19 06:56:55.262306 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:55.262317 | orchestrator | 2025-09-19 06:56:55.262328 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-19 06:56:55.262339 | orchestrator | Friday 19 September 2025 06:56:51 +0000 (0:00:00.159) 0:00:37.379 ****** 2025-09-19 06:56:55.262350 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:55.262361 | orchestrator | 2025-09-19 06:56:55.262372 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-19 06:56:55.262383 | orchestrator | Friday 19 September 2025 06:56:52 +0000 (0:00:00.133) 0:00:37.513 ****** 2025-09-19 06:56:55.262394 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:56:55.262405 | orchestrator | 2025-09-19 06:56:55.262417 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-19 06:56:55.262427 | orchestrator | Friday 19 September 2025 06:56:52 +0000 (0:00:00.255) 0:00:37.768 ****** 2025-09-19 06:56:55.262439 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 
'value': {'osd_lvm_uuid': '2af2e838-b751-5a2f-ab09-cbc0dc745073'}}) 2025-09-19 06:56:55.262451 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '03228564-3151-5027-920d-737061be0eca'}}) 2025-09-19 06:56:55.262463 | orchestrator | 2025-09-19 06:56:55.262474 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-19 06:56:55.262485 | orchestrator | Friday 19 September 2025 06:56:52 +0000 (0:00:00.154) 0:00:37.923 ****** 2025-09-19 06:56:55.262526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2af2e838-b751-5a2f-ab09-cbc0dc745073'}})  2025-09-19 06:56:55.262541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '03228564-3151-5027-920d-737061be0eca'}})  2025-09-19 06:56:55.262554 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:55.262566 | orchestrator | 2025-09-19 06:56:55.262596 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-19 06:56:55.262608 | orchestrator | Friday 19 September 2025 06:56:52 +0000 (0:00:00.144) 0:00:38.067 ****** 2025-09-19 06:56:55.262621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2af2e838-b751-5a2f-ab09-cbc0dc745073'}})  2025-09-19 06:56:55.262634 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '03228564-3151-5027-920d-737061be0eca'}})  2025-09-19 06:56:55.262667 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:55.262680 | orchestrator | 2025-09-19 06:56:55.262693 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-19 06:56:55.262705 | orchestrator | Friday 19 September 2025 06:56:52 +0000 (0:00:00.137) 0:00:38.204 ****** 2025-09-19 06:56:55.262718 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': 
{'osd_lvm_uuid': '2af2e838-b751-5a2f-ab09-cbc0dc745073'}})  2025-09-19 06:56:55.262730 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '03228564-3151-5027-920d-737061be0eca'}})  2025-09-19 06:56:55.262743 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:55.262756 | orchestrator | 2025-09-19 06:56:55.262768 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-19 06:56:55.262781 | orchestrator | Friday 19 September 2025 06:56:52 +0000 (0:00:00.147) 0:00:38.352 ****** 2025-09-19 06:56:55.262794 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:56:55.262807 | orchestrator | 2025-09-19 06:56:55.262819 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-19 06:56:55.262831 | orchestrator | Friday 19 September 2025 06:56:53 +0000 (0:00:00.129) 0:00:38.482 ****** 2025-09-19 06:56:55.262844 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:56:55.262857 | orchestrator | 2025-09-19 06:56:55.262869 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-19 06:56:55.262882 | orchestrator | Friday 19 September 2025 06:56:53 +0000 (0:00:00.114) 0:00:38.597 ****** 2025-09-19 06:56:55.262894 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:55.262905 | orchestrator | 2025-09-19 06:56:55.262916 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-19 06:56:55.262927 | orchestrator | Friday 19 September 2025 06:56:53 +0000 (0:00:00.095) 0:00:38.693 ****** 2025-09-19 06:56:55.262938 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:55.262949 | orchestrator | 2025-09-19 06:56:55.262960 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-19 06:56:55.262972 | orchestrator | Friday 19 September 2025 06:56:53 +0000 (0:00:00.131) 0:00:38.825 ****** 
2025-09-19 06:56:55.262983 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:55.262994 | orchestrator | 2025-09-19 06:56:55.263005 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-19 06:56:55.263016 | orchestrator | Friday 19 September 2025 06:56:53 +0000 (0:00:00.117) 0:00:38.942 ****** 2025-09-19 06:56:55.263028 | orchestrator | ok: [testbed-node-5] => { 2025-09-19 06:56:55.263039 | orchestrator |  "ceph_osd_devices": { 2025-09-19 06:56:55.263050 | orchestrator |  "sdb": { 2025-09-19 06:56:55.263062 | orchestrator |  "osd_lvm_uuid": "2af2e838-b751-5a2f-ab09-cbc0dc745073" 2025-09-19 06:56:55.263091 | orchestrator |  }, 2025-09-19 06:56:55.263103 | orchestrator |  "sdc": { 2025-09-19 06:56:55.263114 | orchestrator |  "osd_lvm_uuid": "03228564-3151-5027-920d-737061be0eca" 2025-09-19 06:56:55.263125 | orchestrator |  } 2025-09-19 06:56:55.263136 | orchestrator |  } 2025-09-19 06:56:55.263147 | orchestrator | } 2025-09-19 06:56:55.263159 | orchestrator | 2025-09-19 06:56:55.263170 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-19 06:56:55.263181 | orchestrator | Friday 19 September 2025 06:56:53 +0000 (0:00:00.121) 0:00:39.063 ****** 2025-09-19 06:56:55.263192 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:55.263203 | orchestrator | 2025-09-19 06:56:55.263214 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-19 06:56:55.263225 | orchestrator | Friday 19 September 2025 06:56:53 +0000 (0:00:00.132) 0:00:39.196 ****** 2025-09-19 06:56:55.263236 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:55.263247 | orchestrator | 2025-09-19 06:56:55.263264 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-19 06:56:55.263293 | orchestrator | Friday 19 September 2025 06:56:54 +0000 (0:00:00.256) 0:00:39.453 ****** 
2025-09-19 06:56:55.263311 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:56:55.263330 | orchestrator | 2025-09-19 06:56:55.263348 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-19 06:56:55.263365 | orchestrator | Friday 19 September 2025 06:56:54 +0000 (0:00:00.124) 0:00:39.577 ****** 2025-09-19 06:56:55.263382 | orchestrator | changed: [testbed-node-5] => { 2025-09-19 06:56:55.263398 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-19 06:56:55.263415 | orchestrator |  "ceph_osd_devices": { 2025-09-19 06:56:55.263432 | orchestrator |  "sdb": { 2025-09-19 06:56:55.263450 | orchestrator |  "osd_lvm_uuid": "2af2e838-b751-5a2f-ab09-cbc0dc745073" 2025-09-19 06:56:55.263467 | orchestrator |  }, 2025-09-19 06:56:55.263487 | orchestrator |  "sdc": { 2025-09-19 06:56:55.263546 | orchestrator |  "osd_lvm_uuid": "03228564-3151-5027-920d-737061be0eca" 2025-09-19 06:56:55.263565 | orchestrator |  } 2025-09-19 06:56:55.263584 | orchestrator |  }, 2025-09-19 06:56:55.263602 | orchestrator |  "lvm_volumes": [ 2025-09-19 06:56:55.263621 | orchestrator |  { 2025-09-19 06:56:55.263639 | orchestrator |  "data": "osd-block-2af2e838-b751-5a2f-ab09-cbc0dc745073", 2025-09-19 06:56:55.263658 | orchestrator |  "data_vg": "ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073" 2025-09-19 06:56:55.263670 | orchestrator |  }, 2025-09-19 06:56:55.263681 | orchestrator |  { 2025-09-19 06:56:55.263692 | orchestrator |  "data": "osd-block-03228564-3151-5027-920d-737061be0eca", 2025-09-19 06:56:55.263703 | orchestrator |  "data_vg": "ceph-03228564-3151-5027-920d-737061be0eca" 2025-09-19 06:56:55.263714 | orchestrator |  } 2025-09-19 06:56:55.263725 | orchestrator |  ] 2025-09-19 06:56:55.263736 | orchestrator |  } 2025-09-19 06:56:55.263747 | orchestrator | } 2025-09-19 06:56:55.263763 | orchestrator | 2025-09-19 06:56:55.263774 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 
2025-09-19 06:56:55.263786 | orchestrator | Friday 19 September 2025 06:56:54 +0000 (0:00:00.197) 0:00:39.774 ****** 2025-09-19 06:56:55.263797 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-19 06:56:55.263808 | orchestrator | 2025-09-19 06:56:55.263819 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 06:56:55.263843 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-19 06:56:55.263857 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-19 06:56:55.263868 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-19 06:56:55.263879 | orchestrator | 2025-09-19 06:56:55.263890 | orchestrator | 2025-09-19 06:56:55.263901 | orchestrator | 2025-09-19 06:56:55.263912 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 06:56:55.263923 | orchestrator | Friday 19 September 2025 06:56:55 +0000 (0:00:00.848) 0:00:40.623 ****** 2025-09-19 06:56:55.263934 | orchestrator | =============================================================================== 2025-09-19 06:56:55.263946 | orchestrator | Write configuration file ------------------------------------------------ 4.06s 2025-09-19 06:56:55.263956 | orchestrator | Add known partitions to the list of available block devices ------------- 1.28s 2025-09-19 06:56:55.263968 | orchestrator | Add known links to the list of available block devices ------------------ 1.12s 2025-09-19 06:56:55.263978 | orchestrator | Add known partitions to the list of available block devices ------------- 1.05s 2025-09-19 06:56:55.263989 | orchestrator | Get initial list of available block devices ----------------------------- 1.02s 2025-09-19 06:56:55.264000 | orchestrator | Add known links to the list of available block 
devices ------------------ 0.89s 2025-09-19 06:56:55.264021 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.87s 2025-09-19 06:56:55.264033 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s 2025-09-19 06:56:55.264044 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s 2025-09-19 06:56:55.264055 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s 2025-09-19 06:56:55.264066 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.66s 2025-09-19 06:56:55.264077 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s 2025-09-19 06:56:55.264088 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s 2025-09-19 06:56:55.264099 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s 2025-09-19 06:56:55.264121 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.60s 2025-09-19 06:56:55.486175 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s 2025-09-19 06:56:55.486269 | orchestrator | Print configuration data ------------------------------------------------ 0.57s 2025-09-19 06:56:55.486282 | orchestrator | Print DB devices -------------------------------------------------------- 0.53s 2025-09-19 06:56:55.486292 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.53s 2025-09-19 06:56:55.486302 | orchestrator | Set WAL devices config data --------------------------------------------- 0.51s 2025-09-19 06:57:18.231780 | orchestrator | 2025-09-19 06:57:18 | INFO  | Task d4a6b756-0ef3-4f15-9d45-3e4afa343b2a (sync inventory) is running in background. Output coming soon. 
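The play above derives the `lvm_volumes` list from `ceph_osd_devices`: each device's `osd_lvm_uuid` becomes an `osd-block-<uuid>` logical volume name paired with a `ceph-<uuid>` volume group name, as shown in the "Print configuration data" output. A minimal sketch of that mapping, reconstructed from the log output only (this is an illustration, not the actual OSISM role code):

```python
def build_lvm_volumes(ceph_osd_devices):
    """Derive the "block only" lvm_volumes entries from ceph_osd_devices.

    Reconstructed from the task output in the log above: each device spec
    carrying an osd_lvm_uuid yields one data LV / data VG name pair.
    """
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for spec in ceph_osd_devices.values()
        if spec and "osd_lvm_uuid" in spec
    ]


# Sample input taken verbatim from the "Print ceph_osd_devices" task output.
devices = {
    "sdb": {"osd_lvm_uuid": "2af2e838-b751-5a2f-ab09-cbc0dc745073"},
    "sdc": {"osd_lvm_uuid": "03228564-3151-5027-920d-737061be0eca"},
}
lvm_volumes = build_lvm_volumes(devices)
```

With the `sdb`/`sdc` input above, this reproduces the two `data`/`data_vg` entries printed by the "Print configuration data" task.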
2025-09-19 06:57:35.709892 | orchestrator | 2025-09-19 06:57:19 | INFO  | Starting group_vars file reorganization 2025-09-19 06:57:35.709994 | orchestrator | 2025-09-19 06:57:19 | INFO  | Moved 0 file(s) to their respective directories 2025-09-19 06:57:35.710007 | orchestrator | 2025-09-19 06:57:19 | INFO  | Group_vars file reorganization completed 2025-09-19 06:57:35.710070 | orchestrator | 2025-09-19 06:57:21 | INFO  | Starting variable preparation from inventory 2025-09-19 06:57:35.710081 | orchestrator | 2025-09-19 06:57:22 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-09-19 06:57:35.710090 | orchestrator | 2025-09-19 06:57:22 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-09-19 06:57:35.710100 | orchestrator | 2025-09-19 06:57:22 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-09-19 06:57:35.710108 | orchestrator | 2025-09-19 06:57:22 | INFO  | 3 file(s) written, 6 host(s) processed 2025-09-19 06:57:35.710117 | orchestrator | 2025-09-19 06:57:22 | INFO  | Variable preparation completed 2025-09-19 06:57:35.710125 | orchestrator | 2025-09-19 06:57:23 | INFO  | Starting inventory overwrite handling 2025-09-19 06:57:35.710134 | orchestrator | 2025-09-19 06:57:23 | INFO  | Handling group overwrites in 99-overwrite 2025-09-19 06:57:35.710142 | orchestrator | 2025-09-19 06:57:23 | INFO  | Removing group frr:children from 60-generic 2025-09-19 06:57:35.710151 | orchestrator | 2025-09-19 06:57:23 | INFO  | Removing group storage:children from 50-kolla 2025-09-19 06:57:35.710160 | orchestrator | 2025-09-19 06:57:23 | INFO  | Removing group netbird:children from 50-infrastruture 2025-09-19 06:57:35.710168 | orchestrator | 2025-09-19 06:57:23 | INFO  | Removing group ceph-rgw from 50-ceph 2025-09-19 06:57:35.710177 | orchestrator | 2025-09-19 06:57:23 | INFO  | Removing group ceph-mds from 50-ceph 2025-09-19 06:57:35.710185 | orchestrator | 2025-09-19 06:57:23 | INFO  | Handling group 
overwrites in 20-roles 2025-09-19 06:57:35.710193 | orchestrator | 2025-09-19 06:57:23 | INFO  | Removing group k3s_node from 50-infrastruture 2025-09-19 06:57:35.710225 | orchestrator | 2025-09-19 06:57:23 | INFO  | Removed 6 group(s) in total 2025-09-19 06:57:35.710233 | orchestrator | 2025-09-19 06:57:23 | INFO  | Inventory overwrite handling completed 2025-09-19 06:57:35.710241 | orchestrator | 2025-09-19 06:57:24 | INFO  | Starting merge of inventory files 2025-09-19 06:57:35.710249 | orchestrator | 2025-09-19 06:57:24 | INFO  | Inventory files merged successfully 2025-09-19 06:57:35.710257 | orchestrator | 2025-09-19 06:57:28 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-09-19 06:57:35.710266 | orchestrator | 2025-09-19 06:57:34 | INFO  | Successfully wrote ClusterShell configuration 2025-09-19 06:57:35.710274 | orchestrator | [master cadab1a] 2025-09-19-06-57 2025-09-19 06:57:35.710284 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-09-19 06:57:37.890000 | orchestrator | 2025-09-19 06:57:37 | INFO  | Task 1a3fa7cf-e913-4554-af7d-e0ef26123e56 (ceph-create-lvm-devices) was prepared for execution. 2025-09-19 06:57:37.890150 | orchestrator | 2025-09-19 06:57:37 | INFO  | It takes a moment until task 1a3fa7cf-e913-4554-af7d-e0ef26123e56 (ceph-create-lvm-devices) has been started and output is visible here. 
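The `ceph-create-lvm-devices` play that starts here first builds a dict of block VGs to PVs from `ceph_osd_devices`, then creates one volume group per OSD device and a block LV inside it. A hedged sketch of the equivalent LVM command lines (the play itself uses Ansible tasks, not shell; the `/dev/sdX` paths and the `lvcreate -l 100%FREE` sizing are assumptions for illustration):

```python
def lvm_commands(lvm_volumes, vg_to_pv):
    """Emit illustrative vgcreate/lvcreate command lines for each entry.

    lvm_volumes: list of {"data": ..., "data_vg": ...} dicts as generated
    by the configuration play; vg_to_pv maps each VG name to its backing
    physical volume (an assumed /dev path, per the VG->PV dict task).
    """
    cmds = []
    for vol in lvm_volumes:
        pv = vg_to_pv[vol["data_vg"]]
        cmds.append(f"vgcreate {vol['data_vg']} {pv}")
        # Sizing is an assumption: one LV consuming the whole VG.
        cmds.append(f"lvcreate -l 100%FREE -n {vol['data']} {vol['data_vg']}")
    return cmds


# UUIDs taken from the testbed-node-3 output below; device paths assumed.
vols = [
    {
        "data": "osd-block-787edb9c-1668-5795-8146-b6ac8c49142c",
        "data_vg": "ceph-787edb9c-1668-5795-8146-b6ac8c49142c",
    },
]
cmds = lvm_commands(
    vols, {"ceph-787edb9c-1668-5795-8146-b6ac8c49142c": "/dev/sdb"}
)
```

This mirrors the items the "Create block VGs" and "Create block LVs" tasks report as changed for testbed-node-3.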
2025-09-19 06:57:49.270218 | orchestrator | 2025-09-19 06:57:49.270306 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-19 06:57:49.270322 | orchestrator | 2025-09-19 06:57:49.270334 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-19 06:57:49.270346 | orchestrator | Friday 19 September 2025 06:57:42 +0000 (0:00:00.328) 0:00:00.328 ****** 2025-09-19 06:57:49.270358 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 06:57:49.270369 | orchestrator | 2025-09-19 06:57:49.270381 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-19 06:57:49.270392 | orchestrator | Friday 19 September 2025 06:57:42 +0000 (0:00:00.251) 0:00:00.579 ****** 2025-09-19 06:57:49.270404 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:57:49.270416 | orchestrator | 2025-09-19 06:57:49.270427 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:49.270438 | orchestrator | Friday 19 September 2025 06:57:42 +0000 (0:00:00.252) 0:00:00.831 ****** 2025-09-19 06:57:49.270461 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-19 06:57:49.270510 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-19 06:57:49.270523 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-19 06:57:49.270534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-19 06:57:49.270546 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-19 06:57:49.270557 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-19 06:57:49.270568 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-19 06:57:49.270579 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-19 06:57:49.270591 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-19 06:57:49.270602 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-19 06:57:49.270614 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-19 06:57:49.270625 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-19 06:57:49.270636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-19 06:57:49.270648 | orchestrator | 2025-09-19 06:57:49.270659 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:49.270690 | orchestrator | Friday 19 September 2025 06:57:43 +0000 (0:00:00.426) 0:00:01.258 ****** 2025-09-19 06:57:49.270702 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:49.270714 | orchestrator | 2025-09-19 06:57:49.270725 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:49.270750 | orchestrator | Friday 19 September 2025 06:57:43 +0000 (0:00:00.460) 0:00:01.719 ****** 2025-09-19 06:57:49.270762 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:49.270774 | orchestrator | 2025-09-19 06:57:49.270787 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:49.270799 | orchestrator | Friday 19 September 2025 06:57:43 +0000 (0:00:00.191) 0:00:01.910 ****** 2025-09-19 06:57:49.270811 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:49.270824 | orchestrator | 2025-09-19 06:57:49.270841 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-09-19 06:57:49.270854 | orchestrator | Friday 19 September 2025 06:57:43 +0000 (0:00:00.210) 0:00:02.121 ****** 2025-09-19 06:57:49.270866 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:49.270879 | orchestrator | 2025-09-19 06:57:49.270892 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:49.270904 | orchestrator | Friday 19 September 2025 06:57:44 +0000 (0:00:00.206) 0:00:02.327 ****** 2025-09-19 06:57:49.270917 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:49.270930 | orchestrator | 2025-09-19 06:57:49.270942 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:49.270954 | orchestrator | Friday 19 September 2025 06:57:44 +0000 (0:00:00.183) 0:00:02.510 ****** 2025-09-19 06:57:49.270967 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:49.270979 | orchestrator | 2025-09-19 06:57:49.270992 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:49.271004 | orchestrator | Friday 19 September 2025 06:57:44 +0000 (0:00:00.202) 0:00:02.713 ****** 2025-09-19 06:57:49.271016 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:49.271029 | orchestrator | 2025-09-19 06:57:49.271041 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:49.271054 | orchestrator | Friday 19 September 2025 06:57:44 +0000 (0:00:00.189) 0:00:02.903 ****** 2025-09-19 06:57:49.271067 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:49.271079 | orchestrator | 2025-09-19 06:57:49.271091 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:49.271104 | orchestrator | Friday 19 September 2025 06:57:44 +0000 (0:00:00.179) 0:00:03.082 ****** 2025-09-19 06:57:49.271117 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a) 2025-09-19 06:57:49.271131 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a) 2025-09-19 06:57:49.271143 | orchestrator | 2025-09-19 06:57:49.271154 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:49.271166 | orchestrator | Friday 19 September 2025 06:57:45 +0000 (0:00:00.370) 0:00:03.453 ****** 2025-09-19 06:57:49.271191 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a2591162-fd7d-4f7c-a24f-a875e0bfaf5c) 2025-09-19 06:57:49.271204 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a2591162-fd7d-4f7c-a24f-a875e0bfaf5c) 2025-09-19 06:57:49.271215 | orchestrator | 2025-09-19 06:57:49.271226 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:49.271237 | orchestrator | Friday 19 September 2025 06:57:45 +0000 (0:00:00.386) 0:00:03.840 ****** 2025-09-19 06:57:49.271249 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1117915d-c4ec-4d47-9877-c3f2a311bdd8) 2025-09-19 06:57:49.271260 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1117915d-c4ec-4d47-9877-c3f2a311bdd8) 2025-09-19 06:57:49.271271 | orchestrator | 2025-09-19 06:57:49.271282 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:49.271300 | orchestrator | Friday 19 September 2025 06:57:46 +0000 (0:00:00.492) 0:00:04.333 ****** 2025-09-19 06:57:49.271311 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_af8571bd-f20f-46c1-9b84-53d29d179301) 2025-09-19 06:57:49.271322 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_af8571bd-f20f-46c1-9b84-53d29d179301) 2025-09-19 06:57:49.271334 | orchestrator | 2025-09-19 06:57:49.271345 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:49.271356 | orchestrator | Friday 19 September 2025 06:57:46 +0000 (0:00:00.555) 0:00:04.888 ****** 2025-09-19 06:57:49.271367 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-19 06:57:49.271378 | orchestrator | 2025-09-19 06:57:49.271389 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:57:49.271400 | orchestrator | Friday 19 September 2025 06:57:47 +0000 (0:00:00.573) 0:00:05.462 ****** 2025-09-19 06:57:49.271411 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-19 06:57:49.271422 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-19 06:57:49.271433 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-19 06:57:49.271444 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-19 06:57:49.271456 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-19 06:57:49.271492 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-19 06:57:49.271505 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-19 06:57:49.271516 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-19 06:57:49.271527 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-19 06:57:49.271539 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-19 06:57:49.271550 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-19 06:57:49.271561 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-19 06:57:49.271572 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-19 06:57:49.271583 | orchestrator | 2025-09-19 06:57:49.271594 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:57:49.271605 | orchestrator | Friday 19 September 2025 06:57:47 +0000 (0:00:00.370) 0:00:05.833 ****** 2025-09-19 06:57:49.271617 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:49.271628 | orchestrator | 2025-09-19 06:57:49.271639 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:57:49.271650 | orchestrator | Friday 19 September 2025 06:57:47 +0000 (0:00:00.181) 0:00:06.014 ****** 2025-09-19 06:57:49.271661 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:49.271673 | orchestrator | 2025-09-19 06:57:49.271684 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:57:49.271695 | orchestrator | Friday 19 September 2025 06:57:48 +0000 (0:00:00.203) 0:00:06.217 ****** 2025-09-19 06:57:49.271706 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:49.271717 | orchestrator | 2025-09-19 06:57:49.271728 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:57:49.271740 | orchestrator | Friday 19 September 2025 06:57:48 +0000 (0:00:00.205) 0:00:06.422 ****** 2025-09-19 06:57:49.271751 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:49.271762 | orchestrator | 2025-09-19 06:57:49.271773 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:57:49.271784 | orchestrator | Friday 19 September 2025 
06:57:48 +0000 (0:00:00.202) 0:00:06.625 ******
2025-09-19 06:57:49.271802 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:49.271813 | orchestrator |
2025-09-19 06:57:49.271824 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:49.271836 | orchestrator | Friday 19 September 2025 06:57:48 +0000 (0:00:00.222) 0:00:06.847 ******
2025-09-19 06:57:49.271847 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:49.271858 | orchestrator |
2025-09-19 06:57:49.271869 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:49.271880 | orchestrator | Friday 19 September 2025 06:57:48 +0000 (0:00:00.186) 0:00:07.034 ******
2025-09-19 06:57:49.271891 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:49.271903 | orchestrator |
2025-09-19 06:57:49.271914 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:49.271925 | orchestrator | Friday 19 September 2025 06:57:49 +0000 (0:00:00.168) 0:00:07.203 ******
2025-09-19 06:57:49.271943 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:56.921419 | orchestrator |
2025-09-19 06:57:56.921549 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:56.921566 | orchestrator | Friday 19 September 2025 06:57:49 +0000 (0:00:00.199) 0:00:07.402 ******
2025-09-19 06:57:56.921577 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-09-19 06:57:56.921590 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-09-19 06:57:56.921601 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-09-19 06:57:56.921612 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-09-19 06:57:56.921624 | orchestrator |
2025-09-19 06:57:56.921635 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:56.921646 | orchestrator | Friday 19 September 2025 06:57:50 +0000 (0:00:00.917) 0:00:08.320 ******
2025-09-19 06:57:56.921657 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:56.921669 | orchestrator |
2025-09-19 06:57:56.921680 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:56.921691 | orchestrator | Friday 19 September 2025 06:57:50 +0000 (0:00:00.171) 0:00:08.491 ******
2025-09-19 06:57:56.921702 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:56.921713 | orchestrator |
2025-09-19 06:57:56.921725 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:56.921736 | orchestrator | Friday 19 September 2025 06:57:50 +0000 (0:00:00.209) 0:00:08.701 ******
2025-09-19 06:57:56.921747 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:56.921758 | orchestrator |
2025-09-19 06:57:56.921769 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:56.921780 | orchestrator | Friday 19 September 2025 06:57:50 +0000 (0:00:00.188) 0:00:08.890 ******
2025-09-19 06:57:56.921791 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:56.921803 | orchestrator |
2025-09-19 06:57:56.921814 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-09-19 06:57:56.921825 | orchestrator | Friday 19 September 2025 06:57:50 +0000 (0:00:00.179) 0:00:09.069 ******
2025-09-19 06:57:56.921836 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:56.921847 | orchestrator |
2025-09-19 06:57:56.921858 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-09-19 06:57:56.921870 | orchestrator | Friday 19 September 2025 06:57:51 +0000 (0:00:00.122) 0:00:09.192 ******
2025-09-19 06:57:56.921881 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '787edb9c-1668-5795-8146-b6ac8c49142c'}})
2025-09-19 06:57:56.921892 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'af475f18-71a6-5278-b018-36a08189cb1c'}})
2025-09-19 06:57:56.921904 | orchestrator |
2025-09-19 06:57:56.921915 | orchestrator | TASK [Create block VGs] ********************************************************
2025-09-19 06:57:56.921926 | orchestrator | Friday 19 September 2025 06:57:51 +0000 (0:00:00.201) 0:00:09.393 ******
2025-09-19 06:57:56.921938 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-787edb9c-1668-5795-8146-b6ac8c49142c', 'data_vg': 'ceph-787edb9c-1668-5795-8146-b6ac8c49142c'})
2025-09-19 06:57:56.921972 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-af475f18-71a6-5278-b018-36a08189cb1c', 'data_vg': 'ceph-af475f18-71a6-5278-b018-36a08189cb1c'})
2025-09-19 06:57:56.921985 | orchestrator |
2025-09-19 06:57:56.922058 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-09-19 06:57:56.922073 | orchestrator | Friday 19 September 2025 06:57:53 +0000 (0:00:01.918) 0:00:11.312 ******
2025-09-19 06:57:56.922090 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-787edb9c-1668-5795-8146-b6ac8c49142c', 'data_vg': 'ceph-787edb9c-1668-5795-8146-b6ac8c49142c'})
2025-09-19 06:57:56.922105 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af475f18-71a6-5278-b018-36a08189cb1c', 'data_vg': 'ceph-af475f18-71a6-5278-b018-36a08189cb1c'})
2025-09-19 06:57:56.922117 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:56.922130 | orchestrator |
2025-09-19 06:57:56.922143 | orchestrator | TASK [Create block LVs] ********************************************************
2025-09-19 06:57:56.922155 | orchestrator | Friday 19 September 2025 06:57:53 +0000 (0:00:00.135) 0:00:11.447 ******
2025-09-19 06:57:56.922168 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-787edb9c-1668-5795-8146-b6ac8c49142c', 'data_vg': 'ceph-787edb9c-1668-5795-8146-b6ac8c49142c'})
2025-09-19 06:57:56.922181 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-af475f18-71a6-5278-b018-36a08189cb1c', 'data_vg': 'ceph-af475f18-71a6-5278-b018-36a08189cb1c'})
2025-09-19 06:57:56.922194 | orchestrator |
2025-09-19 06:57:56.922206 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-09-19 06:57:56.922220 | orchestrator | Friday 19 September 2025 06:57:54 +0000 (0:00:01.455) 0:00:12.903 ******
2025-09-19 06:57:56.922233 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-787edb9c-1668-5795-8146-b6ac8c49142c', 'data_vg': 'ceph-787edb9c-1668-5795-8146-b6ac8c49142c'})
2025-09-19 06:57:56.922246 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af475f18-71a6-5278-b018-36a08189cb1c', 'data_vg': 'ceph-af475f18-71a6-5278-b018-36a08189cb1c'})
2025-09-19 06:57:56.922259 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:56.922272 | orchestrator |
2025-09-19 06:57:56.922285 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-09-19 06:57:56.922298 | orchestrator | Friday 19 September 2025 06:57:54 +0000 (0:00:00.150) 0:00:13.054 ******
2025-09-19 06:57:56.922311 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:56.922323 | orchestrator |
2025-09-19 06:57:56.922334 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-09-19 06:57:56.922360 | orchestrator | Friday 19 September 2025 06:57:55 +0000 (0:00:00.155) 0:00:13.210 ******
2025-09-19 06:57:56.922372 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-787edb9c-1668-5795-8146-b6ac8c49142c', 'data_vg': 'ceph-787edb9c-1668-5795-8146-b6ac8c49142c'})
2025-09-19 06:57:56.922383 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af475f18-71a6-5278-b018-36a08189cb1c', 'data_vg': 'ceph-af475f18-71a6-5278-b018-36a08189cb1c'})
2025-09-19 06:57:56.922394 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:56.922405 | orchestrator |
2025-09-19 06:57:56.922417 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-19 06:57:56.922428 | orchestrator | Friday 19 September 2025 06:57:55 +0000 (0:00:00.350) 0:00:13.560 ******
2025-09-19 06:57:56.922439 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:56.922450 | orchestrator |
2025-09-19 06:57:56.922482 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-19 06:57:56.922505 | orchestrator | Friday 19 September 2025 06:57:55 +0000 (0:00:00.146) 0:00:13.707 ******
2025-09-19 06:57:56.922523 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-787edb9c-1668-5795-8146-b6ac8c49142c', 'data_vg': 'ceph-787edb9c-1668-5795-8146-b6ac8c49142c'})
2025-09-19 06:57:56.922556 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af475f18-71a6-5278-b018-36a08189cb1c', 'data_vg': 'ceph-af475f18-71a6-5278-b018-36a08189cb1c'})
2025-09-19 06:57:56.922577 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:56.922595 | orchestrator |
2025-09-19 06:57:56.922614 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-19 06:57:56.922631 | orchestrator | Friday 19 September 2025 06:57:55 +0000 (0:00:00.177) 0:00:13.884 ******
2025-09-19 06:57:56.922648 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:56.922666 | orchestrator |
2025-09-19 06:57:56.922684 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-09-19 06:57:56.922703 | orchestrator | Friday 19 September 2025 06:57:55 +0000 (0:00:00.128) 0:00:14.013 ******
2025-09-19 06:57:56.922723 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-787edb9c-1668-5795-8146-b6ac8c49142c', 'data_vg': 'ceph-787edb9c-1668-5795-8146-b6ac8c49142c'})
2025-09-19 06:57:56.922742 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af475f18-71a6-5278-b018-36a08189cb1c', 'data_vg': 'ceph-af475f18-71a6-5278-b018-36a08189cb1c'})
2025-09-19 06:57:56.922761 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:56.922778 | orchestrator |
2025-09-19 06:57:56.922795 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-09-19 06:57:56.922813 | orchestrator | Friday 19 September 2025 06:57:56 +0000 (0:00:00.152) 0:00:14.165 ******
2025-09-19 06:57:56.922831 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:57:56.922849 | orchestrator |
2025-09-19 06:57:56.922867 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-09-19 06:57:56.922884 | orchestrator | Friday 19 September 2025 06:57:56 +0000 (0:00:00.150) 0:00:14.316 ******
2025-09-19 06:57:56.922903 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-787edb9c-1668-5795-8146-b6ac8c49142c', 'data_vg': 'ceph-787edb9c-1668-5795-8146-b6ac8c49142c'})
2025-09-19 06:57:56.922932 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af475f18-71a6-5278-b018-36a08189cb1c', 'data_vg': 'ceph-af475f18-71a6-5278-b018-36a08189cb1c'})
2025-09-19 06:57:56.922953 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:56.922971 | orchestrator |
2025-09-19 06:57:56.922989 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-09-19 06:57:56.923009 | orchestrator | Friday 19 September 2025 06:57:56 +0000 (0:00:00.155) 0:00:14.472 ******
2025-09-19 06:57:56.923029 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-787edb9c-1668-5795-8146-b6ac8c49142c', 'data_vg': 'ceph-787edb9c-1668-5795-8146-b6ac8c49142c'})
2025-09-19 06:57:56.923047 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af475f18-71a6-5278-b018-36a08189cb1c', 'data_vg': 'ceph-af475f18-71a6-5278-b018-36a08189cb1c'})
2025-09-19 06:57:56.923065 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:56.923085 | orchestrator |
2025-09-19 06:57:56.923104 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-09-19 06:57:56.923123 | orchestrator | Friday 19 September 2025 06:57:56 +0000 (0:00:00.173) 0:00:14.645 ******
2025-09-19 06:57:56.923142 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-787edb9c-1668-5795-8146-b6ac8c49142c', 'data_vg': 'ceph-787edb9c-1668-5795-8146-b6ac8c49142c'})
2025-09-19 06:57:56.923162 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af475f18-71a6-5278-b018-36a08189cb1c', 'data_vg': 'ceph-af475f18-71a6-5278-b018-36a08189cb1c'})
2025-09-19 06:57:56.923181 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:56.923198 | orchestrator |
2025-09-19 06:57:56.923216 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-09-19 06:57:56.923234 | orchestrator | Friday 19 September 2025 06:57:56 +0000 (0:00:00.149) 0:00:14.794 ******
2025-09-19 06:57:56.923251 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:56.923270 | orchestrator |
2025-09-19 06:57:56.923287 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-09-19 06:57:56.923319 | orchestrator | Friday 19 September 2025 06:57:56 +0000 (0:00:00.114) 0:00:14.909 ******
2025-09-19 06:57:56.923338 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:56.923355 | orchestrator |
2025-09-19 06:57:56.923390 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-19 06:58:03.427270 | orchestrator | Friday 19 September 2025 06:57:56 +0000 (0:00:00.146) 0:00:15.056 ******
2025-09-19 06:58:03.427367 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:03.427382 | orchestrator |
2025-09-19 06:58:03.427393 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-19 06:58:03.427404 | orchestrator | Friday 19 September 2025 06:57:57 +0000 (0:00:00.144) 0:00:15.200 ******
2025-09-19 06:58:03.427415 | orchestrator | ok: [testbed-node-3] => {
2025-09-19 06:58:03.427425 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-09-19 06:58:03.427436 | orchestrator | }
2025-09-19 06:58:03.427447 | orchestrator |
2025-09-19 06:58:03.427457 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-19 06:58:03.427519 | orchestrator | Friday 19 September 2025 06:57:57 +0000 (0:00:00.365) 0:00:15.565 ******
2025-09-19 06:58:03.427529 | orchestrator | ok: [testbed-node-3] => {
2025-09-19 06:58:03.427540 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-09-19 06:58:03.427550 | orchestrator | }
2025-09-19 06:58:03.427560 | orchestrator |
2025-09-19 06:58:03.427571 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-19 06:58:03.427581 | orchestrator | Friday 19 September 2025 06:57:57 +0000 (0:00:00.161) 0:00:15.726 ******
2025-09-19 06:58:03.427591 | orchestrator | ok: [testbed-node-3] => {
2025-09-19 06:58:03.427601 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-09-19 06:58:03.427612 | orchestrator | }
2025-09-19 06:58:03.427622 | orchestrator |
2025-09-19 06:58:03.427633 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-19 06:58:03.427643 | orchestrator | Friday 19 September 2025 06:57:57 +0000 (0:00:00.139) 0:00:15.866 ******
2025-09-19 06:58:03.427653 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:58:03.427664 | orchestrator |
2025-09-19 06:58:03.427674 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-19 06:58:03.427684 | orchestrator | Friday 19 September 2025 06:57:58 +0000 (0:00:00.668) 0:00:16.535 ******
2025-09-19 06:58:03.427694 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:58:03.427704 | orchestrator |
2025-09-19 06:58:03.427714 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-19 06:58:03.427724 | orchestrator | Friday 19 September 2025 06:57:58 +0000 (0:00:00.544) 0:00:17.080 ******
2025-09-19 06:58:03.427735 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:58:03.427745 | orchestrator |
2025-09-19 06:58:03.427755 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-19 06:58:03.427765 | orchestrator | Friday 19 September 2025 06:57:59 +0000 (0:00:00.533) 0:00:17.613 ******
2025-09-19 06:58:03.427775 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:58:03.427785 | orchestrator |
2025-09-19 06:58:03.427796 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-19 06:58:03.427806 | orchestrator | Friday 19 September 2025 06:57:59 +0000 (0:00:00.162) 0:00:17.776 ******
2025-09-19 06:58:03.427816 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:03.427829 | orchestrator |
2025-09-19 06:58:03.427840 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-19 06:58:03.427852 | orchestrator | Friday 19 September 2025 06:57:59 +0000 (0:00:00.114) 0:00:17.890 ******
2025-09-19 06:58:03.427863 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:03.427875 | orchestrator |
2025-09-19 06:58:03.427886 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-19 06:58:03.427898 | orchestrator | Friday 19 September 2025 06:57:59 +0000 (0:00:00.108) 0:00:17.999 ******
2025-09-19 06:58:03.427910 | orchestrator | ok: [testbed-node-3] => {
2025-09-19 06:58:03.427942 | orchestrator |     "vgs_report": {
2025-09-19 06:58:03.427954 | orchestrator |         "vg": []
2025-09-19 06:58:03.427965 | orchestrator |     }
2025-09-19 06:58:03.427977 | orchestrator | }
2025-09-19 06:58:03.427988 | orchestrator |
2025-09-19 06:58:03.428000 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-19 06:58:03.428012 | orchestrator | Friday 19 September 2025 06:58:00 +0000 (0:00:00.154) 0:00:18.153 ******
2025-09-19 06:58:03.428023 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:03.428035 | orchestrator |
2025-09-19 06:58:03.428046 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-19 06:58:03.428057 | orchestrator | Friday 19 September 2025 06:58:00 +0000 (0:00:00.147) 0:00:18.300 ******
2025-09-19 06:58:03.428068 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:03.428080 | orchestrator |
2025-09-19 06:58:03.428091 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-19 06:58:03.428102 | orchestrator | Friday 19 September 2025 06:58:00 +0000 (0:00:00.139) 0:00:18.440 ******
2025-09-19 06:58:03.428114 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:03.428125 | orchestrator |
2025-09-19 06:58:03.428136 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-19 06:58:03.428148 | orchestrator | Friday 19 September 2025 06:58:00 +0000 (0:00:00.350) 0:00:18.790 ******
2025-09-19 06:58:03.428160 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:03.428170 | orchestrator |
2025-09-19 06:58:03.428180 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-19 06:58:03.428190 | orchestrator | Friday 19 September 2025 06:58:00 +0000 (0:00:00.136) 0:00:18.926 ******
2025-09-19 06:58:03.428200 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:03.428210 | orchestrator |
2025-09-19 06:58:03.428236 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-19 06:58:03.428246 | orchestrator | Friday 19 September 2025 06:58:00 +0000 (0:00:00.146) 0:00:19.073 ******
2025-09-19 06:58:03.428257 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:03.428267 | orchestrator |
2025-09-19 06:58:03.428276 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-19 06:58:03.428286 | orchestrator | Friday 19 September 2025 06:58:01 +0000 (0:00:00.151) 0:00:19.224 ******
2025-09-19 06:58:03.428296 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:03.428306 | orchestrator |
2025-09-19 06:58:03.428316 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-19 06:58:03.428326 | orchestrator | Friday 19 September 2025 06:58:01 +0000 (0:00:00.152) 0:00:19.376 ******
2025-09-19 06:58:03.428336 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:03.428346 | orchestrator |
2025-09-19 06:58:03.428356 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-19 06:58:03.428380 | orchestrator | Friday 19 September 2025 06:58:01 +0000 (0:00:00.145) 0:00:19.521 ******
2025-09-19 06:58:03.428391 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:03.428402 | orchestrator |
2025-09-19 06:58:03.428412 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-19 06:58:03.428421 | orchestrator | Friday 19 September 2025 06:58:01 +0000 (0:00:00.136) 0:00:19.658 ******
2025-09-19 06:58:03.428431 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:03.428441 | orchestrator |
2025-09-19 06:58:03.428451 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-19 06:58:03.428479 | orchestrator | Friday 19 September 2025 06:58:01 +0000 (0:00:00.136) 0:00:19.794 ******
2025-09-19 06:58:03.428490 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:03.428500 | orchestrator |
2025-09-19 06:58:03.428510 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-19 06:58:03.428520 | orchestrator | Friday 19 September 2025 06:58:01 +0000 (0:00:00.128) 0:00:19.922 ******
2025-09-19 06:58:03.428530 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:03.428540 | orchestrator |
2025-09-19 06:58:03.428550 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-19 06:58:03.428568 | orchestrator | Friday 19 September 2025 06:58:01 +0000 (0:00:00.145) 0:00:20.068 ******
2025-09-19 06:58:03.428578 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:03.428588 | orchestrator |
2025-09-19 06:58:03.428598 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-19 06:58:03.428609 | orchestrator | Friday 19 September 2025 06:58:02 +0000 (0:00:00.171) 0:00:20.239 ******
2025-09-19 06:58:03.428618 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:03.428629 | orchestrator |
2025-09-19 06:58:03.428639 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-19 06:58:03.428649 | orchestrator | Friday 19 September 2025 06:58:02 +0000 (0:00:00.152) 0:00:20.391 ******
2025-09-19 06:58:03.428660 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-787edb9c-1668-5795-8146-b6ac8c49142c', 'data_vg': 'ceph-787edb9c-1668-5795-8146-b6ac8c49142c'})
2025-09-19 06:58:03.428672 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af475f18-71a6-5278-b018-36a08189cb1c', 'data_vg': 'ceph-af475f18-71a6-5278-b018-36a08189cb1c'})
2025-09-19 06:58:03.428682 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:03.428692 | orchestrator |
2025-09-19 06:58:03.428702 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-19 06:58:03.428712 | orchestrator | Friday 19 September 2025 06:58:02 +0000 (0:00:00.168) 0:00:20.560 ******
2025-09-19 06:58:03.428722 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-787edb9c-1668-5795-8146-b6ac8c49142c', 'data_vg': 'ceph-787edb9c-1668-5795-8146-b6ac8c49142c'})
2025-09-19 06:58:03.428733 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af475f18-71a6-5278-b018-36a08189cb1c', 'data_vg': 'ceph-af475f18-71a6-5278-b018-36a08189cb1c'})
2025-09-19 06:58:03.428743 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:03.428753 | orchestrator |
2025-09-19 06:58:03.428763 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-19 06:58:03.428772 | orchestrator | Friday 19 September 2025 06:58:02 +0000 (0:00:00.346) 0:00:20.906 ******
2025-09-19 06:58:03.428788 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-787edb9c-1668-5795-8146-b6ac8c49142c', 'data_vg': 'ceph-787edb9c-1668-5795-8146-b6ac8c49142c'})
2025-09-19 06:58:03.428798 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af475f18-71a6-5278-b018-36a08189cb1c', 'data_vg': 'ceph-af475f18-71a6-5278-b018-36a08189cb1c'})
2025-09-19 06:58:03.428808 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:03.428818 | orchestrator |
2025-09-19 06:58:03.428828 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-19 06:58:03.428838 | orchestrator | Friday 19 September 2025 06:58:02 +0000 (0:00:00.159) 0:00:21.066 ******
2025-09-19 06:58:03.428848 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-787edb9c-1668-5795-8146-b6ac8c49142c', 'data_vg': 'ceph-787edb9c-1668-5795-8146-b6ac8c49142c'})
2025-09-19 06:58:03.428858 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af475f18-71a6-5278-b018-36a08189cb1c', 'data_vg': 'ceph-af475f18-71a6-5278-b018-36a08189cb1c'})
2025-09-19 06:58:03.428868 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:03.428878 | orchestrator |
2025-09-19 06:58:03.428888 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-19 06:58:03.428898 | orchestrator | Friday 19 September 2025 06:58:03 +0000 (0:00:00.166) 0:00:21.232 ******
2025-09-19 06:58:03.428908 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-787edb9c-1668-5795-8146-b6ac8c49142c', 'data_vg': 'ceph-787edb9c-1668-5795-8146-b6ac8c49142c'})
2025-09-19 06:58:03.428918 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af475f18-71a6-5278-b018-36a08189cb1c', 'data_vg': 'ceph-af475f18-71a6-5278-b018-36a08189cb1c'})
2025-09-19 06:58:03.428928 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:03.428938 | orchestrator |
2025-09-19 06:58:03.428948 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-19 06:58:03.428964 | orchestrator | Friday 19 September 2025 06:58:03 +0000 (0:00:00.162) 0:00:21.399 ******
2025-09-19 06:58:03.428974 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-787edb9c-1668-5795-8146-b6ac8c49142c', 'data_vg': 'ceph-787edb9c-1668-5795-8146-b6ac8c49142c'})
2025-09-19 06:58:03.428990 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af475f18-71a6-5278-b018-36a08189cb1c', 'data_vg': 'ceph-af475f18-71a6-5278-b018-36a08189cb1c'})
2025-09-19 06:58:08.803424 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:08.803611 | orchestrator |
2025-09-19 06:58:08.803638 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-19 06:58:08.803660 | orchestrator | Friday 19 September 2025 06:58:03 +0000 (0:00:00.162) 0:00:21.561 ******
2025-09-19 06:58:08.803681 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-787edb9c-1668-5795-8146-b6ac8c49142c', 'data_vg': 'ceph-787edb9c-1668-5795-8146-b6ac8c49142c'})
2025-09-19 06:58:08.803702 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af475f18-71a6-5278-b018-36a08189cb1c', 'data_vg': 'ceph-af475f18-71a6-5278-b018-36a08189cb1c'})
2025-09-19 06:58:08.803722 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:08.803742 | orchestrator |
2025-09-19 06:58:08.803761 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-19 06:58:08.803781 | orchestrator | Friday 19 September 2025 06:58:03 +0000 (0:00:00.191) 0:00:21.753 ******
2025-09-19 06:58:08.803801 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-787edb9c-1668-5795-8146-b6ac8c49142c', 'data_vg': 'ceph-787edb9c-1668-5795-8146-b6ac8c49142c'})
2025-09-19 06:58:08.803820 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af475f18-71a6-5278-b018-36a08189cb1c', 'data_vg': 'ceph-af475f18-71a6-5278-b018-36a08189cb1c'})
2025-09-19 06:58:08.803838 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:08.803855 | orchestrator |
2025-09-19 06:58:08.803873 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-19 06:58:08.803894 | orchestrator | Friday 19 September 2025 06:58:03 +0000 (0:00:00.154) 0:00:21.907 ******
2025-09-19 06:58:08.803915 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:58:08.803937 | orchestrator |
2025-09-19 06:58:08.803960 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-09-19 06:58:08.803980 | orchestrator | Friday 19 September 2025 06:58:04 +0000 (0:00:00.514) 0:00:22.421 ******
2025-09-19 06:58:08.804002 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:58:08.804022 | orchestrator |
2025-09-19 06:58:08.804043 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-19 06:58:08.804064 | orchestrator | Friday 19 September 2025 06:58:04 +0000 (0:00:00.505) 0:00:22.927 ******
2025-09-19 06:58:08.804086 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:58:08.804105 | orchestrator |
2025-09-19 06:58:08.804123 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-19 06:58:08.804142 | orchestrator | Friday 19 September 2025 06:58:04 +0000 (0:00:00.147) 0:00:23.074 ******
2025-09-19 06:58:08.804163 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-787edb9c-1668-5795-8146-b6ac8c49142c', 'vg_name': 'ceph-787edb9c-1668-5795-8146-b6ac8c49142c'})
2025-09-19 06:58:08.804186 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-af475f18-71a6-5278-b018-36a08189cb1c', 'vg_name': 'ceph-af475f18-71a6-5278-b018-36a08189cb1c'})
2025-09-19 06:58:08.804206 | orchestrator |
2025-09-19 06:58:08.804227 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-19 06:58:08.804248 | orchestrator | Friday 19 September 2025 06:58:05 +0000 (0:00:00.209) 0:00:23.284 ******
2025-09-19 06:58:08.804268 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-787edb9c-1668-5795-8146-b6ac8c49142c', 'data_vg': 'ceph-787edb9c-1668-5795-8146-b6ac8c49142c'})
2025-09-19 06:58:08.804288 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af475f18-71a6-5278-b018-36a08189cb1c', 'data_vg': 'ceph-af475f18-71a6-5278-b018-36a08189cb1c'})
2025-09-19 06:58:08.804337 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:08.804357 | orchestrator |
2025-09-19 06:58:08.804377 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-19 06:58:08.804397 | orchestrator | Friday 19 September 2025 06:58:05 +0000 (0:00:00.157) 0:00:23.441 ******
2025-09-19 06:58:08.804417 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-787edb9c-1668-5795-8146-b6ac8c49142c', 'data_vg': 'ceph-787edb9c-1668-5795-8146-b6ac8c49142c'})
2025-09-19 06:58:08.804437 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af475f18-71a6-5278-b018-36a08189cb1c', 'data_vg': 'ceph-af475f18-71a6-5278-b018-36a08189cb1c'})
2025-09-19 06:58:08.804457 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:08.804551 | orchestrator |
2025-09-19 06:58:08.804570 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-19 06:58:08.804588 | orchestrator | Friday 19 September 2025 06:58:05 +0000 (0:00:00.331) 0:00:23.773 ******
2025-09-19 06:58:08.804606 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-787edb9c-1668-5795-8146-b6ac8c49142c', 'data_vg': 'ceph-787edb9c-1668-5795-8146-b6ac8c49142c'})
2025-09-19 06:58:08.804625 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af475f18-71a6-5278-b018-36a08189cb1c', 'data_vg': 'ceph-af475f18-71a6-5278-b018-36a08189cb1c'})
2025-09-19 06:58:08.804643 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:58:08.804661 | orchestrator |
2025-09-19 06:58:08.804679 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-19 06:58:08.804696 | orchestrator | Friday 19 September 2025 06:58:05 +0000 (0:00:00.161) 0:00:23.934 ******
2025-09-19 06:58:08.804714 | orchestrator | ok: [testbed-node-3] => {
2025-09-19 06:58:08.804732 | orchestrator |     "lvm_report": {
2025-09-19 06:58:08.804749 | orchestrator |         "lv": [
2025-09-19 06:58:08.804767 | orchestrator |             {
2025-09-19 06:58:08.804810 | orchestrator |                 "lv_name": "osd-block-787edb9c-1668-5795-8146-b6ac8c49142c",
2025-09-19 06:58:08.804828 | orchestrator |                 "vg_name": "ceph-787edb9c-1668-5795-8146-b6ac8c49142c"
2025-09-19 06:58:08.804847 | orchestrator |             },
2025-09-19 06:58:08.804866 | orchestrator |             {
2025-09-19 06:58:08.804885 | orchestrator |                 "lv_name": "osd-block-af475f18-71a6-5278-b018-36a08189cb1c",
2025-09-19 06:58:08.804904 | orchestrator |                 "vg_name": "ceph-af475f18-71a6-5278-b018-36a08189cb1c"
2025-09-19 06:58:08.804924 | orchestrator |             }
2025-09-19 06:58:08.804943 | orchestrator |         ],
2025-09-19 06:58:08.804962 | orchestrator |         "pv": [
2025-09-19 06:58:08.804980 | orchestrator |             {
2025-09-19 06:58:08.804999 | orchestrator |                 "pv_name": "/dev/sdb",
2025-09-19 06:58:08.805018 | orchestrator |                 "vg_name": "ceph-787edb9c-1668-5795-8146-b6ac8c49142c"
2025-09-19 06:58:08.805036 | orchestrator |             },
2025-09-19 06:58:08.805055 | orchestrator |             {
2025-09-19 06:58:08.805074 | orchestrator |                 "pv_name": "/dev/sdc",
2025-09-19 06:58:08.805093 | orchestrator |                 "vg_name": "ceph-af475f18-71a6-5278-b018-36a08189cb1c"
2025-09-19 06:58:08.805112 | orchestrator |             }
2025-09-19 06:58:08.805130 | orchestrator |         ]
2025-09-19 06:58:08.805150 | orchestrator |     }
2025-09-19 06:58:08.805168 | orchestrator | }
2025-09-19 06:58:08.805186 | orchestrator |
2025-09-19 06:58:08.805205 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-09-19 06:58:08.805224 | orchestrator |
2025-09-19 06:58:08.805242 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-19 06:58:08.805261 | orchestrator | Friday 19 September 2025 06:58:06 +0000 (0:00:00.315) 0:00:24.250 ******
2025-09-19 06:58:08.805281 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-09-19 06:58:08.805301 | orchestrator |
2025-09-19 06:58:08.805336 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-19 06:58:08.805355 | orchestrator | Friday 19 September 2025 06:58:06 +0000 (0:00:00.238) 0:00:24.488 ******
2025-09-19 06:58:08.805375 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:58:08.805393 | orchestrator |
2025-09-19 06:58:08.805411 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:58:08.805429 | orchestrator | Friday 19 September 2025 06:58:06 +0000 (0:00:00.228) 0:00:24.716 ******
2025-09-19 06:58:08.805491 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-09-19 06:58:08.805512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-09-19 06:58:08.805529 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-09-19 06:58:08.805546 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-09-19 06:58:08.805564 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-09-19 06:58:08.805584 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-09-19 06:58:08.805602 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-09-19 06:58:08.805619 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-09-19 06:58:08.805645 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-09-19 06:58:08.805663 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-09-19 06:58:08.805681 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-09-19 06:58:08.805700 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-09-19 06:58:08.805717 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-09-19 06:58:08.805737 | orchestrator |
2025-09-19 06:58:08.805751 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:58:08.805762 | orchestrator | Friday 19 September 2025 06:58:07 +0000 (0:00:00.455) 0:00:25.172 ******
2025-09-19 06:58:08.805774 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:58:08.805785 | orchestrator |
2025-09-19 06:58:08.805796 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:58:08.805807 | orchestrator | Friday 19 September 2025 06:58:07 +0000 (0:00:00.208) 0:00:25.380 ******
2025-09-19 06:58:08.805818 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:58:08.805829 | orchestrator |
2025-09-19 06:58:08.805840 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:58:08.805851 | orchestrator | Friday 19 September 2025 06:58:07 +0000 (0:00:00.194) 0:00:25.575 ******
2025-09-19 06:58:08.805863 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:58:08.805874 | orchestrator |
2025-09-19 06:58:08.805885 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:58:08.805896 | orchestrator | Friday 19 September 2025 06:58:07 +0000 (0:00:00.207) 0:00:25.782 ******
2025-09-19 06:58:08.805907 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:58:08.805918 | orchestrator |
2025-09-19 06:58:08.805929 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:58:08.805940 | orchestrator | Friday 19 September 2025 06:58:08 +0000 (0:00:00.566) 0:00:26.348 ******
2025-09-19 06:58:08.805951 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:58:08.805962 | orchestrator |
2025-09-19 06:58:08.805974 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:58:08.805985 | orchestrator | Friday 19 September 2025 06:58:08 +0000 (0:00:00.243) 0:00:26.592 ******
2025-09-19 06:58:08.805996 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:58:08.806007 | orchestrator |
2025-09-19 06:58:08.806075 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:58:08.806100 | orchestrator | Friday 19 September 2025 06:58:08 +0000 (0:00:00.170) 0:00:26.762 ******
2025-09-19 06:58:08.806112 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:58:08.806123 | orchestrator |
2025-09-19 06:58:08.806197 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:58:18.652355 | orchestrator | Friday 19 September 2025 06:58:08 +0000 (0:00:00.176) 0:00:26.939 ******
2025-09-19 06:58:18.652448 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:58:18.652503 | orchestrator |
2025-09-19 06:58:18.652516 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:58:18.652528 | orchestrator | Friday 19 September 2025 06:58:08 +0000 (0:00:00.180) 0:00:27.120 ******
2025-09-19 06:58:18.652540 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd)
2025-09-19 06:58:18.652552 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd)
2025-09-19 06:58:18.652564 | orchestrator |
2025-09-19 06:58:18.652575 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:58:18.652587 | orchestrator | Friday 19 September 2025 06:58:09 +0000 (0:00:00.386) 0:00:27.506 ******
2025-09-19 06:58:18.652598 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9b35f7c3-f4ee-4f20-a638-8acbecbf2b97)
2025-09-19 06:58:18.652609 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9b35f7c3-f4ee-4f20-a638-8acbecbf2b97)
2025-09-19 06:58:18.652620 | orchestrator |
2025-09-19 06:58:18.652631 | orchestrator | TASK [Add known
links to the list of available block devices] ****************** 2025-09-19 06:58:18.652643 | orchestrator | Friday 19 September 2025 06:58:09 +0000 (0:00:00.392) 0:00:27.898 ****** 2025-09-19 06:58:18.652654 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0ec87ec4-de78-4354-a913-8c3da733e508) 2025-09-19 06:58:18.652665 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0ec87ec4-de78-4354-a913-8c3da733e508) 2025-09-19 06:58:18.652676 | orchestrator | 2025-09-19 06:58:18.652687 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:18.652698 | orchestrator | Friday 19 September 2025 06:58:10 +0000 (0:00:00.415) 0:00:28.314 ****** 2025-09-19 06:58:18.652709 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f326ea53-fd8a-4d1e-8637-ed74e9f7229b) 2025-09-19 06:58:18.652721 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f326ea53-fd8a-4d1e-8637-ed74e9f7229b) 2025-09-19 06:58:18.652732 | orchestrator | 2025-09-19 06:58:18.652743 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:18.652754 | orchestrator | Friday 19 September 2025 06:58:10 +0000 (0:00:00.397) 0:00:28.712 ****** 2025-09-19 06:58:18.652765 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-19 06:58:18.652776 | orchestrator | 2025-09-19 06:58:18.652788 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:58:18.652799 | orchestrator | Friday 19 September 2025 06:58:10 +0000 (0:00:00.332) 0:00:29.044 ****** 2025-09-19 06:58:18.652810 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-19 06:58:18.652835 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-19 06:58:18.652847 | orchestrator | 
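The link-discovery tasks above attach the stable `/dev/disk/by-id` names (the `scsi-0QEMU_…`/`scsi-SQEMU_…` items) to each raw device name. A minimal sketch of that grouping step, assuming a plain mapping of link name to target device (the function name and mapping shape are invented for illustration; the link names are taken from the log):

```python
# Hypothetical sketch: group stable /dev/disk/by-id links by the device they
# resolve to, mirroring what the _add-device-links.yml include appears to do.
def links_for_device(by_id_links: dict[str, str], device: str) -> list[str]:
    """Return all by-id link names that point at the given device."""
    return sorted(name for name, target in by_id_links.items() if target == device)

# Example mapping; which disk each link belongs to is assumed here.
by_id = {
    "scsi-0QEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd": "sdb",
    "scsi-SQEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd": "sdb",
    "ata-QEMU_DVD-ROM_QM00001": "sr0",
}

print(links_for_device(by_id, "sdb"))
```

Using by-id links rather than `sdX` names keeps the device list stable across reboots, which is why the play collects them before any LVM work.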
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)

TASK [Add known partitions to the list of available block devices] *************
Friday 19 September 2025 06:58:11 +0000 (0:00:00.583) 0:00:29.627 ******
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Friday 19 September 2025 06:58:11 +0000 (0:00:00.242) 0:00:29.870 ******
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Friday 19 September 2025 06:58:11 +0000 (0:00:00.214) 0:00:30.084 ******
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Friday 19 September 2025 06:58:12 +0000 (0:00:00.224) 0:00:30.309 ******
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Friday 19 September 2025 06:58:12 +0000 (0:00:00.193) 0:00:30.502 ******
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Friday 19 September 2025 06:58:12 +0000 (0:00:00.199) 0:00:30.702 ******
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Friday 19 September 2025 06:58:12 +0000 (0:00:00.235) 0:00:30.937 ******
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Friday 19 September 2025 06:58:13 +0000 (0:00:00.244) 0:00:31.182 ******
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Friday 19 September 2025 06:58:13 +0000 (0:00:00.213) 0:00:31.395 ******
ok: [testbed-node-4] => (item=sda1)
ok: [testbed-node-4] => (item=sda14)
ok: [testbed-node-4] => (item=sda15)
ok: [testbed-node-4] => (item=sda16)

TASK [Add known partitions to the list of available block devices] *************
Friday 19 September 2025 06:58:14 +0000 (0:00:00.821) 0:00:32.217 ******
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Friday 19 September 2025 06:58:14 +0000 (0:00:00.192) 0:00:32.410 ******
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Friday 19 September 2025 06:58:14 +0000 (0:00:00.209) 0:00:32.620 ******
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Friday 19 September 2025 06:58:14 +0000 (0:00:00.477) 0:00:33.097 ******
skipping: [testbed-node-4]

TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
Friday 19 September 2025 06:58:15 +0000 (0:00:00.205) 0:00:33.302 ******
skipping: [testbed-node-4]

TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
Friday 19 September 2025 06:58:15 +0000 (0:00:00.135) 0:00:33.438 ******
ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5631a8c0-2403-5b6d-b4ab-3f734fe52f75'}})
ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '32fceb46-e08d-5445-84d6-a85b98e59ab0'}})

TASK [Create block VGs] ********************************************************
Friday 19 September 2025 06:58:15 +0000 (0:00:00.178) 0:00:33.617 ******
changed: [testbed-node-4] => (item={'data': 'osd-block-5631a8c0-2403-5b6d-b4ab-3f734fe52f75', 'data_vg': 'ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75'})
changed: [testbed-node-4] => (item={'data': 'osd-block-32fceb46-e08d-5445-84d6-a85b98e59ab0', 'data_vg': 'ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0'})
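The naming convention in the "Create block VGs" items follows directly from the `ceph_osd_devices` dict shown just above: each device's `osd_lvm_uuid` becomes both the VG name (`ceph-<uuid>`) and the LV name (`osd-block-<uuid>`). A small sketch of that derivation (the function name is invented; the data and output format match the log):

```python
# Derive the VG/LV naming seen in the "Create block VGs" / "Create block LVs"
# tasks from a ceph_osd_devices-style dict.
def lvm_volumes_from_osd_devices(ceph_osd_devices: dict) -> list[dict]:
    return [
        {
            "data": f"osd-block-{entry['osd_lvm_uuid']}",      # LV name
            "data_vg": f"ceph-{entry['osd_lvm_uuid']}",        # VG name
        }
        for entry in ceph_osd_devices.values()
    ]

# The two OSD devices from this run (testbed-node-4):
devices = {
    "sdb": {"osd_lvm_uuid": "5631a8c0-2403-5b6d-b4ab-3f734fe52f75"},
    "sdc": {"osd_lvm_uuid": "32fceb46-e08d-5445-84d6-a85b98e59ab0"},
}
```

Keying everything off a stable per-OSD UUID means re-running the play is idempotent: the same device always maps to the same VG/LV pair.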

TASK [Print 'Create block VGs'] ************************************************
Friday 19 September 2025 06:58:17 +0000 (0:00:01.757) 0:00:35.374 ******
skipping: [testbed-node-4] => (item={'data': 'osd-block-5631a8c0-2403-5b6d-b4ab-3f734fe52f75', 'data_vg': 'ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-32fceb46-e08d-5445-84d6-a85b98e59ab0', 'data_vg': 'ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0'})
skipping: [testbed-node-4]

TASK [Create block LVs] ********************************************************
Friday 19 September 2025 06:58:17 +0000 (0:00:00.145) 0:00:35.520 ******
changed: [testbed-node-4] => (item={'data': 'osd-block-5631a8c0-2403-5b6d-b4ab-3f734fe52f75', 'data_vg': 'ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75'})
changed: [testbed-node-4] => (item={'data': 'osd-block-32fceb46-e08d-5445-84d6-a85b98e59ab0', 'data_vg': 'ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0'})

TASK [Print 'Create block LVs'] ************************************************
Friday 19 September 2025 06:58:18 +0000 (0:00:01.265) 0:00:36.786 ******
skipping: [testbed-node-4] => (item={'data': 'osd-block-5631a8c0-2403-5b6d-b4ab-3f734fe52f75', 'data_vg': 'ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-32fceb46-e08d-5445-84d6-a85b98e59ab0', 'data_vg': 'ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0'})
skipping: [testbed-node-4]

TASK [Create DB VGs] ***********************************************************
Friday 19 September 2025 06:58:18 +0000 (0:00:00.165) 0:00:36.952 ******
skipping: [testbed-node-4]

TASK [Print 'Create DB VGs'] ***************************************************
Friday 19 September 2025 06:58:18 +0000 (0:00:00.129) 0:00:37.082 ******
skipping: [testbed-node-4] => (item={'data': 'osd-block-5631a8c0-2403-5b6d-b4ab-3f734fe52f75', 'data_vg': 'ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-32fceb46-e08d-5445-84d6-a85b98e59ab0', 'data_vg': 'ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0'})
skipping: [testbed-node-4]

TASK [Create WAL VGs] **********************************************************
Friday 19 September 2025 06:58:19 +0000 (0:00:00.154) 0:00:37.236 ******
skipping: [testbed-node-4]

TASK [Print 'Create WAL VGs'] **************************************************
Friday 19 September 2025 06:58:19 +0000 (0:00:00.116) 0:00:37.352 ******
skipping: [testbed-node-4] => (item={'data': 'osd-block-5631a8c0-2403-5b6d-b4ab-3f734fe52f75', 'data_vg': 'ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-32fceb46-e08d-5445-84d6-a85b98e59ab0', 'data_vg': 'ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0'})
skipping: [testbed-node-4]

TASK [Create DB+WAL VGs] *******************************************************
Friday 19 September 2025 06:58:19 +0000 (0:00:00.150) 0:00:37.502 ******
skipping: [testbed-node-4]

TASK [Print 'Create DB+WAL VGs'] ***********************************************
Friday 19 September 2025 06:58:19 +0000 (0:00:00.246) 0:00:37.749 ******
skipping: [testbed-node-4] => (item={'data': 'osd-block-5631a8c0-2403-5b6d-b4ab-3f734fe52f75', 'data_vg': 'ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-32fceb46-e08d-5445-84d6-a85b98e59ab0', 'data_vg': 'ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0'})
skipping: [testbed-node-4]

TASK [Prepare variables for OSD count check] ***********************************
Friday 19 September 2025 06:58:19 +0000 (0:00:00.146) 0:00:37.895 ******
ok: [testbed-node-4]

TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
Friday 19 September 2025 06:58:19 +0000 (0:00:00.132) 0:00:38.028 ******
skipping: [testbed-node-4] => (item={'data': 'osd-block-5631a8c0-2403-5b6d-b4ab-3f734fe52f75', 'data_vg': 'ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-32fceb46-e08d-5445-84d6-a85b98e59ab0', 'data_vg': 'ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0'})
skipping: [testbed-node-4]

TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
Friday 19 September 2025 06:58:20 +0000 (0:00:00.138) 0:00:38.166 ******
skipping: [testbed-node-4] => (item={'data': 'osd-block-5631a8c0-2403-5b6d-b4ab-3f734fe52f75', 'data_vg': 'ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-32fceb46-e08d-5445-84d6-a85b98e59ab0', 'data_vg': 'ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0'})
skipping: [testbed-node-4]

TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
Friday 19 September 2025 06:58:20 +0000 (0:00:00.139) 0:00:38.305 ******
skipping: [testbed-node-4] => (item={'data': 'osd-block-5631a8c0-2403-5b6d-b4ab-3f734fe52f75', 'data_vg': 'ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-32fceb46-e08d-5445-84d6-a85b98e59ab0', 'data_vg': 'ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0'})
skipping: [testbed-node-4]

TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
Friday 19 September 2025 06:58:20 +0000 (0:00:00.154) 0:00:38.460 ******
skipping: [testbed-node-4]

TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
Friday 19 September 2025 06:58:20 +0000 (0:00:00.143) 0:00:38.603 ******
skipping: [testbed-node-4]

TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
Friday 19 September 2025 06:58:20 +0000 (0:00:00.143) 0:00:38.747 ******
skipping: [testbed-node-4]

TASK [Print number of OSDs wanted per DB VG] ***********************************
Friday 19 September 2025 06:58:20 +0000 (0:00:00.145) 0:00:38.892 ******
ok: [testbed-node-4] => {
    "_num_osds_wanted_per_db_vg": {}
}

TASK [Print number of OSDs wanted per WAL VG] **********************************
Friday 19 September 2025 06:58:20 +0000 (0:00:00.143) 0:00:39.035 ******
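The "Count OSDs put on …" and "Fail if number of OSDs exceeds num_osds …" tasks together enforce that no shared DB/WAL volume group is assigned more OSDs than its configured capacity. A minimal sketch of that guard, assuming `lvm_volumes` entries carry an optional `db_vg` key and `num_osds` maps VG name to the allowed count (both field names are assumptions for illustration, not the role's exact variable layout):

```python
from collections import Counter

# Hypothetical sketch of the OSD-per-DB-VG guard: count how many lvm_volumes
# entries target each DB VG and fail when a VG is oversubscribed.
def check_osds_per_db_vg(lvm_volumes: list[dict], num_osds: dict[str, int]) -> None:
    counts = Counter(v["db_vg"] for v in lvm_volumes if "db_vg" in v)
    for vg, count in counts.items():
        wanted = num_osds.get(vg, 0)
        if count > wanted:
            raise ValueError(
                f"{vg}: {count} OSDs defined in lvm_volumes, "
                f"but num_osds allows only {wanted}"
            )
```

In this run every such task skips, because the two OSDs use whole block devices and no `ceph_db_devices`/`ceph_wal_devices` are defined, which is also why `_num_osds_wanted_per_db_vg` and friends print as empty dicts.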
ok: [testbed-node-4] => {
    "_num_osds_wanted_per_wal_vg": {}
}

TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
Friday 19 September 2025 06:58:21 +0000 (0:00:00.157) 0:00:39.193 ******
ok: [testbed-node-4] => {
    "_num_osds_wanted_per_db_wal_vg": {}
}

TASK [Gather DB VGs with total and available size in bytes] ********************
Friday 19 September 2025 06:58:21 +0000 (0:00:00.161) 0:00:39.354 ******
ok: [testbed-node-4]

TASK [Gather WAL VGs with total and available size in bytes] *******************
Friday 19 September 2025 06:58:21 +0000 (0:00:00.719) 0:00:40.074 ******
ok: [testbed-node-4]

TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
Friday 19 September 2025 06:58:22 +0000 (0:00:00.528) 0:00:40.602 ******
ok: [testbed-node-4]

TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
Friday 19 September 2025 06:58:23 +0000 (0:00:00.553) 0:00:41.156 ******
ok: [testbed-node-4]

TASK [Calculate VG sizes (without buffer)] *************************************
Friday 19 September 2025 06:58:23 +0000 (0:00:00.162) 0:00:41.318 ******
skipping: [testbed-node-4]

TASK [Calculate VG sizes (with buffer)] ****************************************
Friday 19 September 2025 06:58:23 +0000 (0:00:00.121) 0:00:41.440 ******
skipping: [testbed-node-4]

TASK [Print LVM VGs report data] ***********************************************
Friday 19 September 2025 06:58:23 +0000 (0:00:00.110) 0:00:41.550 ******
ok: [testbed-node-4] => {
    "vgs_report": {
        "vg": []
    }
}

TASK [Print LVM VG sizes] ******************************************************
Friday 19 September 2025 06:58:23 +0000 (0:00:00.145) 0:00:41.695 ******
skipping: [testbed-node-4]

TASK [Calculate size needed for LVs on ceph_db_devices] ************************
Friday 19 September 2025 06:58:23 +0000 (0:00:00.125) 0:00:41.820 ******
skipping: [testbed-node-4]

TASK [Print size needed for LVs on ceph_db_devices] ****************************
Friday 19 September 2025 06:58:23 +0000 (0:00:00.125) 0:00:41.946 ******
skipping: [testbed-node-4]

TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
Friday 19 September 2025 06:58:23 +0000 (0:00:00.131) 0:00:42.077 ******
skipping: [testbed-node-4]

TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
Friday 19 September 2025 06:58:24 +0000 (0:00:00.134) 0:00:42.212 ******
skipping: [testbed-node-4]

TASK [Print size needed for LVs on ceph_wal_devices] ***************************
Friday 19 September 2025 06:58:24 +0000 (0:00:00.140) 0:00:42.353 ******
skipping: [testbed-node-4]

TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
Friday 19 September 2025 06:58:24 +0000 (0:00:00.256) 0:00:42.609 ******
skipping: [testbed-node-4]

TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
Friday 19 September 2025 06:58:24 +0000 (0:00:00.123) 0:00:42.733 ******
skipping: [testbed-node-4]

TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
Friday 19 September 2025 06:58:24 +0000 (0:00:00.132) 0:00:42.866 ******
skipping: [testbed-node-4]

TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
Friday 19 September 2025 06:58:24 +0000 (0:00:00.122) 0:00:42.988 ******
skipping: [testbed-node-4]

TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
Friday 19 September 2025 06:58:24 +0000 (0:00:00.120) 0:00:43.109 ******
skipping: [testbed-node-4]

TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
Friday 19 September 2025 06:58:25 +0000 (0:00:00.116) 0:00:43.225 ******
skipping: [testbed-node-4]

TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
Friday 19 September 2025 06:58:25 +0000 (0:00:00.132) 0:00:43.358 ******
skipping: [testbed-node-4]
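The "Gather … VGs with total and available size in bytes" tasks collect an LVM report (the `_db/wal/db_wal_vgs_cmd_output` variables that the next task combines into the `vgs_report` shown above, empty in this run). A sketch of consuming such a report, assuming it was produced by something like `vgs --reportformat json --units b --nosuffix` so that sizes are plain byte counts (the command, sample data, and function name are illustrative assumptions):

```python
import json

# Parse a vgs-style JSON report into {vg_name: free_bytes}, then check that
# the space needed for planned DB/WAL LVs fits into each VG.
def free_bytes_by_vg(vgs_json: str) -> dict[str, int]:
    report = json.loads(vgs_json)
    return {vg["vg_name"]: int(vg["vg_free"]) for vg in report["report"][0]["vg"]}

def fits(needed_bytes: int, vg_name: str, free: dict[str, int]) -> bool:
    return needed_bytes <= free.get(vg_name, 0)

# Invented sample report: one 200 GiB VG with 100 GiB free.
sample = json.dumps({"report": [{"vg": [
    {"vg_name": "ceph-db", "vg_size": "214748364800", "vg_free": "107374182400"}
]}]})
```

The "Fail if size of … LVs > available" tasks are then a straight comparison of the calculated size against `vg_free`; here they all skip because no DB/WAL devices are configured.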

TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
Friday 19 September 2025 06:58:25 +0000 (0:00:00.132) 0:00:43.490 ******
skipping: [testbed-node-4]

TASK [Create DB LVs for ceph_db_devices] ***************************************
Friday 19 September 2025 06:58:25 +0000 (0:00:00.130) 0:00:43.621 ******
skipping: [testbed-node-4] => (item={'data': 'osd-block-5631a8c0-2403-5b6d-b4ab-3f734fe52f75', 'data_vg': 'ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-32fceb46-e08d-5445-84d6-a85b98e59ab0', 'data_vg': 'ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0'})
skipping: [testbed-node-4]

TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
Friday 19 September 2025 06:58:25 +0000 (0:00:00.152) 0:00:43.774 ******
skipping: [testbed-node-4] => (item={'data': 'osd-block-5631a8c0-2403-5b6d-b4ab-3f734fe52f75', 'data_vg': 'ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-32fceb46-e08d-5445-84d6-a85b98e59ab0', 'data_vg': 'ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0'})
skipping: [testbed-node-4]

TASK [Create WAL LVs for ceph_wal_devices] *************************************
Friday 19 September 2025 06:58:25 +0000 (0:00:00.133) 0:00:43.907 ******
skipping: [testbed-node-4] => (item={'data': 'osd-block-5631a8c0-2403-5b6d-b4ab-3f734fe52f75', 'data_vg': 'ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-32fceb46-e08d-5445-84d6-a85b98e59ab0', 'data_vg': 'ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0'})
skipping: [testbed-node-4]

TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
Friday 19 September 2025 06:58:25 +0000 (0:00:00.131) 0:00:44.039 ******
skipping: [testbed-node-4] => (item={'data': 'osd-block-5631a8c0-2403-5b6d-b4ab-3f734fe52f75', 'data_vg': 'ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-32fceb46-e08d-5445-84d6-a85b98e59ab0', 'data_vg': 'ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0'})
skipping: [testbed-node-4]

TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
Friday 19 September 2025 06:58:26 +0000 (0:00:00.272) 0:00:44.311 ******
skipping: [testbed-node-4] => (item={'data': 'osd-block-5631a8c0-2403-5b6d-b4ab-3f734fe52f75', 'data_vg': 'ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-32fceb46-e08d-5445-84d6-a85b98e59ab0', 'data_vg': 'ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0'})
skipping: [testbed-node-4]

TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
Friday 19 September 2025 06:58:26 +0000 (0:00:00.166) 0:00:44.477 ******
skipping: [testbed-node-4] => (item={'data': 'osd-block-5631a8c0-2403-5b6d-b4ab-3f734fe52f75', 'data_vg': 'ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-32fceb46-e08d-5445-84d6-a85b98e59ab0', 'data_vg': 'ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0'})
skipping: [testbed-node-4]

TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
Friday 19 September 2025 06:58:26 +0000 (0:00:00.144) 0:00:44.622 ******
skipping: [testbed-node-4] => (item={'data': 'osd-block-5631a8c0-2403-5b6d-b4ab-3f734fe52f75', 'data_vg': 'ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-32fceb46-e08d-5445-84d6-a85b98e59ab0', 'data_vg': 'ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0'})
skipping: [testbed-node-4]

TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
Friday 19 September 2025 06:58:26 +0000 (0:00:00.144) 0:00:44.766 ******
skipping: [testbed-node-4] =>
(item={'data': 'osd-block-5631a8c0-2403-5b6d-b4ab-3f734fe52f75', 'data_vg': 'ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75'})  2025-09-19 06:58:28.344421 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-32fceb46-e08d-5445-84d6-a85b98e59ab0', 'data_vg': 'ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0'})  2025-09-19 06:58:28.344438 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:58:28.344449 | orchestrator | 2025-09-19 06:58:28.344524 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-19 06:58:28.344573 | orchestrator | Friday 19 September 2025 06:58:26 +0000 (0:00:00.140) 0:00:44.907 ****** 2025-09-19 06:58:28.344586 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:58:28.344598 | orchestrator | 2025-09-19 06:58:28.344609 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-19 06:58:28.344620 | orchestrator | Friday 19 September 2025 06:58:27 +0000 (0:00:00.508) 0:00:45.415 ****** 2025-09-19 06:58:28.344631 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:58:28.344642 | orchestrator | 2025-09-19 06:58:28.344653 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-19 06:58:28.344664 | orchestrator | Friday 19 September 2025 06:58:27 +0000 (0:00:00.478) 0:00:45.894 ****** 2025-09-19 06:58:28.344675 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:58:28.344686 | orchestrator | 2025-09-19 06:58:28.344697 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-19 06:58:28.344709 | orchestrator | Friday 19 September 2025 06:58:27 +0000 (0:00:00.140) 0:00:46.035 ****** 2025-09-19 06:58:28.344720 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-32fceb46-e08d-5445-84d6-a85b98e59ab0', 'vg_name': 'ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0'}) 2025-09-19 06:58:28.344732 | orchestrator | ok: [testbed-node-4] => 
(item={'lv_name': 'osd-block-5631a8c0-2403-5b6d-b4ab-3f734fe52f75', 'vg_name': 'ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75'}) 2025-09-19 06:58:28.344743 | orchestrator | 2025-09-19 06:58:28.344754 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-19 06:58:28.344765 | orchestrator | Friday 19 September 2025 06:58:28 +0000 (0:00:00.149) 0:00:46.184 ****** 2025-09-19 06:58:28.344776 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5631a8c0-2403-5b6d-b4ab-3f734fe52f75', 'data_vg': 'ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75'})  2025-09-19 06:58:28.344787 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-32fceb46-e08d-5445-84d6-a85b98e59ab0', 'data_vg': 'ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0'})  2025-09-19 06:58:28.344798 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:58:28.344810 | orchestrator | 2025-09-19 06:58:28.344821 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-19 06:58:28.344832 | orchestrator | Friday 19 September 2025 06:58:28 +0000 (0:00:00.146) 0:00:46.331 ****** 2025-09-19 06:58:28.344843 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5631a8c0-2403-5b6d-b4ab-3f734fe52f75', 'data_vg': 'ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75'})  2025-09-19 06:58:28.344854 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-32fceb46-e08d-5445-84d6-a85b98e59ab0', 'data_vg': 'ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0'})  2025-09-19 06:58:28.344873 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:58:34.439701 | orchestrator | 2025-09-19 06:58:34.439817 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-19 06:58:34.439834 | orchestrator | Friday 19 September 2025 06:58:28 +0000 (0:00:00.149) 0:00:46.480 ****** 2025-09-19 06:58:34.439847 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-5631a8c0-2403-5b6d-b4ab-3f734fe52f75', 'data_vg': 'ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75'})  2025-09-19 06:58:34.439861 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-32fceb46-e08d-5445-84d6-a85b98e59ab0', 'data_vg': 'ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0'})  2025-09-19 06:58:34.439872 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:58:34.439885 | orchestrator | 2025-09-19 06:58:34.439896 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-19 06:58:34.439908 | orchestrator | Friday 19 September 2025 06:58:28 +0000 (0:00:00.143) 0:00:46.623 ****** 2025-09-19 06:58:34.439942 | orchestrator | ok: [testbed-node-4] => { 2025-09-19 06:58:34.439954 | orchestrator |  "lvm_report": { 2025-09-19 06:58:34.439966 | orchestrator |  "lv": [ 2025-09-19 06:58:34.439977 | orchestrator |  { 2025-09-19 06:58:34.439989 | orchestrator |  "lv_name": "osd-block-32fceb46-e08d-5445-84d6-a85b98e59ab0", 2025-09-19 06:58:34.440001 | orchestrator |  "vg_name": "ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0" 2025-09-19 06:58:34.440012 | orchestrator |  }, 2025-09-19 06:58:34.440023 | orchestrator |  { 2025-09-19 06:58:34.440034 | orchestrator |  "lv_name": "osd-block-5631a8c0-2403-5b6d-b4ab-3f734fe52f75", 2025-09-19 06:58:34.440045 | orchestrator |  "vg_name": "ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75" 2025-09-19 06:58:34.440056 | orchestrator |  } 2025-09-19 06:58:34.440067 | orchestrator |  ], 2025-09-19 06:58:34.440078 | orchestrator |  "pv": [ 2025-09-19 06:58:34.440089 | orchestrator |  { 2025-09-19 06:58:34.440100 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-19 06:58:34.440111 | orchestrator |  "vg_name": "ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75" 2025-09-19 06:58:34.440122 | orchestrator |  }, 2025-09-19 06:58:34.440133 | orchestrator |  { 2025-09-19 06:58:34.440144 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-19 06:58:34.440155 | orchestrator |  "vg_name": 
"ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0" 2025-09-19 06:58:34.440166 | orchestrator |  } 2025-09-19 06:58:34.440177 | orchestrator |  ] 2025-09-19 06:58:34.440188 | orchestrator |  } 2025-09-19 06:58:34.440199 | orchestrator | } 2025-09-19 06:58:34.440211 | orchestrator | 2025-09-19 06:58:34.440222 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-19 06:58:34.440233 | orchestrator | 2025-09-19 06:58:34.440247 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-19 06:58:34.440260 | orchestrator | Friday 19 September 2025 06:58:28 +0000 (0:00:00.390) 0:00:47.014 ****** 2025-09-19 06:58:34.440273 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-19 06:58:34.440287 | orchestrator | 2025-09-19 06:58:34.440314 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-19 06:58:34.440327 | orchestrator | Friday 19 September 2025 06:58:29 +0000 (0:00:00.242) 0:00:47.257 ****** 2025-09-19 06:58:34.440338 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:58:34.440349 | orchestrator | 2025-09-19 06:58:34.440361 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:34.440372 | orchestrator | Friday 19 September 2025 06:58:29 +0000 (0:00:00.223) 0:00:47.480 ****** 2025-09-19 06:58:34.440384 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-19 06:58:34.440395 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-19 06:58:34.440406 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-19 06:58:34.440417 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-19 06:58:34.440428 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-19 06:58:34.440439 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-19 06:58:34.440470 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-19 06:58:34.440483 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-19 06:58:34.440494 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-19 06:58:34.440505 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-19 06:58:34.440516 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-19 06:58:34.440536 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-19 06:58:34.440547 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-19 06:58:34.440558 | orchestrator | 2025-09-19 06:58:34.440569 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:34.440580 | orchestrator | Friday 19 September 2025 06:58:29 +0000 (0:00:00.366) 0:00:47.847 ****** 2025-09-19 06:58:34.440590 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:58:34.440602 | orchestrator | 2025-09-19 06:58:34.440617 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:34.440628 | orchestrator | Friday 19 September 2025 06:58:29 +0000 (0:00:00.185) 0:00:48.033 ****** 2025-09-19 06:58:34.440639 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:58:34.440650 | orchestrator | 2025-09-19 06:58:34.440661 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:34.440688 | orchestrator | 
Friday 19 September 2025 06:58:30 +0000 (0:00:00.186) 0:00:48.219 ****** 2025-09-19 06:58:34.440700 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:58:34.440711 | orchestrator | 2025-09-19 06:58:34.440722 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:34.440733 | orchestrator | Friday 19 September 2025 06:58:30 +0000 (0:00:00.174) 0:00:48.393 ****** 2025-09-19 06:58:34.440744 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:58:34.440755 | orchestrator | 2025-09-19 06:58:34.440766 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:34.440777 | orchestrator | Friday 19 September 2025 06:58:30 +0000 (0:00:00.168) 0:00:48.562 ****** 2025-09-19 06:58:34.440788 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:58:34.440799 | orchestrator | 2025-09-19 06:58:34.440810 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:34.440821 | orchestrator | Friday 19 September 2025 06:58:30 +0000 (0:00:00.191) 0:00:48.753 ****** 2025-09-19 06:58:34.440832 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:58:34.440843 | orchestrator | 2025-09-19 06:58:34.440854 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:34.440865 | orchestrator | Friday 19 September 2025 06:58:31 +0000 (0:00:00.446) 0:00:49.199 ****** 2025-09-19 06:58:34.440876 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:58:34.440887 | orchestrator | 2025-09-19 06:58:34.440898 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:34.440909 | orchestrator | Friday 19 September 2025 06:58:31 +0000 (0:00:00.185) 0:00:49.385 ****** 2025-09-19 06:58:34.440920 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:58:34.440931 | orchestrator | 2025-09-19 06:58:34.440942 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:34.440953 | orchestrator | Friday 19 September 2025 06:58:31 +0000 (0:00:00.189) 0:00:49.574 ****** 2025-09-19 06:58:34.440964 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d) 2025-09-19 06:58:34.440976 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d) 2025-09-19 06:58:34.440987 | orchestrator | 2025-09-19 06:58:34.440998 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:34.441009 | orchestrator | Friday 19 September 2025 06:58:31 +0000 (0:00:00.435) 0:00:50.010 ****** 2025-09-19 06:58:34.441020 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1f9d1cec-7d6c-4c71-8749-cd7e53c954b2) 2025-09-19 06:58:34.441031 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1f9d1cec-7d6c-4c71-8749-cd7e53c954b2) 2025-09-19 06:58:34.441042 | orchestrator | 2025-09-19 06:58:34.441053 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:34.441064 | orchestrator | Friday 19 September 2025 06:58:32 +0000 (0:00:00.526) 0:00:50.536 ****** 2025-09-19 06:58:34.441080 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_68d7532d-29ea-4f3d-b7b6-675f70301c39) 2025-09-19 06:58:34.441098 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_68d7532d-29ea-4f3d-b7b6-675f70301c39) 2025-09-19 06:58:34.441109 | orchestrator | 2025-09-19 06:58:34.441120 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:34.441131 | orchestrator | Friday 19 September 2025 06:58:32 +0000 (0:00:00.523) 0:00:51.060 ****** 2025-09-19 06:58:34.441142 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_c8e79e65-71f7-4ae8-8fa4-6c07ef757528) 2025-09-19 06:58:34.441153 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c8e79e65-71f7-4ae8-8fa4-6c07ef757528) 2025-09-19 06:58:34.441164 | orchestrator | 2025-09-19 06:58:34.441175 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:34.441186 | orchestrator | Friday 19 September 2025 06:58:33 +0000 (0:00:00.554) 0:00:51.614 ****** 2025-09-19 06:58:34.441197 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-19 06:58:34.441208 | orchestrator | 2025-09-19 06:58:34.441219 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:58:34.441229 | orchestrator | Friday 19 September 2025 06:58:33 +0000 (0:00:00.442) 0:00:52.056 ****** 2025-09-19 06:58:34.441240 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-19 06:58:34.441251 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-19 06:58:34.441262 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-19 06:58:34.441273 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-19 06:58:34.441284 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-19 06:58:34.441295 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-19 06:58:34.441305 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-19 06:58:34.441316 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-19 06:58:34.441327 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-19 06:58:34.441338 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-19 06:58:34.441349 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-19 06:58:34.441365 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-19 06:58:43.813224 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-19 06:58:43.813317 | orchestrator | 2025-09-19 06:58:43.813333 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:58:43.813345 | orchestrator | Friday 19 September 2025 06:58:34 +0000 (0:00:00.508) 0:00:52.565 ****** 2025-09-19 06:58:43.813357 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:58:43.813369 | orchestrator | 2025-09-19 06:58:43.813380 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:58:43.813391 | orchestrator | Friday 19 September 2025 06:58:34 +0000 (0:00:00.220) 0:00:52.785 ****** 2025-09-19 06:58:43.813403 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:58:43.813414 | orchestrator | 2025-09-19 06:58:43.813425 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:58:43.813437 | orchestrator | Friday 19 September 2025 06:58:34 +0000 (0:00:00.265) 0:00:53.051 ****** 2025-09-19 06:58:43.813525 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:58:43.813547 | orchestrator | 2025-09-19 06:58:43.813565 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:58:43.813585 | orchestrator | Friday 19 September 2025 06:58:35 +0000 (0:00:01.024) 0:00:54.076 ****** 2025-09-19 06:58:43.813619 | orchestrator | 
skipping: [testbed-node-5] 2025-09-19 06:58:43.813631 | orchestrator | 2025-09-19 06:58:43.813642 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:58:43.813653 | orchestrator | Friday 19 September 2025 06:58:36 +0000 (0:00:00.236) 0:00:54.312 ****** 2025-09-19 06:58:43.813664 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:58:43.813675 | orchestrator | 2025-09-19 06:58:43.813686 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:58:43.813697 | orchestrator | Friday 19 September 2025 06:58:36 +0000 (0:00:00.213) 0:00:54.526 ****** 2025-09-19 06:58:43.813708 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:58:43.813719 | orchestrator | 2025-09-19 06:58:43.813730 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:58:43.813741 | orchestrator | Friday 19 September 2025 06:58:36 +0000 (0:00:00.219) 0:00:54.745 ****** 2025-09-19 06:58:43.813752 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:58:43.813763 | orchestrator | 2025-09-19 06:58:43.813774 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:58:43.813785 | orchestrator | Friday 19 September 2025 06:58:36 +0000 (0:00:00.217) 0:00:54.963 ****** 2025-09-19 06:58:43.813798 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:58:43.813811 | orchestrator | 2025-09-19 06:58:43.813824 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:58:43.813837 | orchestrator | Friday 19 September 2025 06:58:37 +0000 (0:00:00.211) 0:00:55.174 ****** 2025-09-19 06:58:43.813849 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-19 06:58:43.813861 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-19 06:58:43.813874 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-19 
06:58:43.813887 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-19 06:58:43.813899 | orchestrator | 2025-09-19 06:58:43.813912 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:58:43.813924 | orchestrator | Friday 19 September 2025 06:58:37 +0000 (0:00:00.690) 0:00:55.865 ****** 2025-09-19 06:58:43.813937 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:58:43.813950 | orchestrator | 2025-09-19 06:58:43.813962 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:58:43.813975 | orchestrator | Friday 19 September 2025 06:58:37 +0000 (0:00:00.218) 0:00:56.084 ****** 2025-09-19 06:58:43.813988 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:58:43.814001 | orchestrator | 2025-09-19 06:58:43.814014 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:58:43.814077 | orchestrator | Friday 19 September 2025 06:58:38 +0000 (0:00:00.212) 0:00:56.296 ****** 2025-09-19 06:58:43.814088 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:58:43.814099 | orchestrator | 2025-09-19 06:58:43.814110 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:58:43.814130 | orchestrator | Friday 19 September 2025 06:58:38 +0000 (0:00:00.201) 0:00:56.498 ****** 2025-09-19 06:58:43.814141 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:58:43.814152 | orchestrator | 2025-09-19 06:58:43.814164 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-19 06:58:43.814175 | orchestrator | Friday 19 September 2025 06:58:38 +0000 (0:00:00.214) 0:00:56.712 ****** 2025-09-19 06:58:43.814186 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:58:43.814197 | orchestrator | 2025-09-19 06:58:43.814208 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-09-19 06:58:43.814219 | orchestrator | Friday 19 September 2025 06:58:38 +0000 (0:00:00.357) 0:00:57.070 ****** 2025-09-19 06:58:43.814231 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2af2e838-b751-5a2f-ab09-cbc0dc745073'}}) 2025-09-19 06:58:43.814242 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '03228564-3151-5027-920d-737061be0eca'}}) 2025-09-19 06:58:43.814262 | orchestrator | 2025-09-19 06:58:43.814273 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-19 06:58:43.814284 | orchestrator | Friday 19 September 2025 06:58:39 +0000 (0:00:00.221) 0:00:57.291 ****** 2025-09-19 06:58:43.814296 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2af2e838-b751-5a2f-ab09-cbc0dc745073', 'data_vg': 'ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073'}) 2025-09-19 06:58:43.814308 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-03228564-3151-5027-920d-737061be0eca', 'data_vg': 'ceph-03228564-3151-5027-920d-737061be0eca'}) 2025-09-19 06:58:43.814320 | orchestrator | 2025-09-19 06:58:43.814331 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-19 06:58:43.814358 | orchestrator | Friday 19 September 2025 06:58:41 +0000 (0:00:01.867) 0:00:59.159 ****** 2025-09-19 06:58:43.814370 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2af2e838-b751-5a2f-ab09-cbc0dc745073', 'data_vg': 'ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073'})  2025-09-19 06:58:43.814382 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03228564-3151-5027-920d-737061be0eca', 'data_vg': 'ceph-03228564-3151-5027-920d-737061be0eca'})  2025-09-19 06:58:43.814394 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:58:43.814405 | orchestrator | 2025-09-19 06:58:43.814416 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-09-19 06:58:43.814427 | orchestrator | Friday 19 September 2025 06:58:41 +0000 (0:00:00.141) 0:00:59.300 ****** 2025-09-19 06:58:43.814443 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2af2e838-b751-5a2f-ab09-cbc0dc745073', 'data_vg': 'ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073'}) 2025-09-19 06:58:43.814501 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-03228564-3151-5027-920d-737061be0eca', 'data_vg': 'ceph-03228564-3151-5027-920d-737061be0eca'}) 2025-09-19 06:58:43.814523 | orchestrator | 2025-09-19 06:58:43.814543 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-19 06:58:43.814559 | orchestrator | Friday 19 September 2025 06:58:42 +0000 (0:00:01.275) 0:01:00.575 ****** 2025-09-19 06:58:43.814571 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2af2e838-b751-5a2f-ab09-cbc0dc745073', 'data_vg': 'ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073'})  2025-09-19 06:58:43.814582 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03228564-3151-5027-920d-737061be0eca', 'data_vg': 'ceph-03228564-3151-5027-920d-737061be0eca'})  2025-09-19 06:58:43.814593 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:58:43.814604 | orchestrator | 2025-09-19 06:58:43.814616 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-19 06:58:43.814627 | orchestrator | Friday 19 September 2025 06:58:42 +0000 (0:00:00.152) 0:01:00.728 ****** 2025-09-19 06:58:43.814638 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:58:43.814648 | orchestrator | 2025-09-19 06:58:43.814659 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-19 06:58:43.814671 | orchestrator | Friday 19 September 2025 06:58:42 +0000 (0:00:00.124) 0:01:00.852 ****** 2025-09-19 06:58:43.814682 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2af2e838-b751-5a2f-ab09-cbc0dc745073', 'data_vg': 'ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073'})
2025-09-19 06:58:43.814698 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03228564-3151-5027-920d-737061be0eca', 'data_vg': 'ceph-03228564-3151-5027-920d-737061be0eca'})
2025-09-19 06:58:43.814709 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:43.814722 | orchestrator |
2025-09-19 06:58:43.814742 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-19 06:58:43.814761 | orchestrator | Friday 19 September 2025 06:58:42 +0000 (0:00:00.145) 0:01:00.998 ******
2025-09-19 06:58:43.814778 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:43.814795 | orchestrator |
2025-09-19 06:58:43.814811 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-19 06:58:43.814840 | orchestrator | Friday 19 September 2025 06:58:43 +0000 (0:00:00.145) 0:01:01.143 ******
2025-09-19 06:58:43.814858 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2af2e838-b751-5a2f-ab09-cbc0dc745073', 'data_vg': 'ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073'})
2025-09-19 06:58:43.814875 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03228564-3151-5027-920d-737061be0eca', 'data_vg': 'ceph-03228564-3151-5027-920d-737061be0eca'})
2025-09-19 06:58:43.814894 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:43.814913 | orchestrator |
2025-09-19 06:58:43.814932 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-19 06:58:43.814951 | orchestrator | Friday 19 September 2025 06:58:43 +0000 (0:00:00.155) 0:01:01.298 ******
2025-09-19 06:58:43.814969 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:43.814989 | orchestrator |
2025-09-19 06:58:43.815008 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-09-19 06:58:43.815027 | orchestrator | Friday 19 September 2025 06:58:43 +0000 (0:00:00.124) 0:01:01.423 ******
2025-09-19 06:58:43.815045 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2af2e838-b751-5a2f-ab09-cbc0dc745073', 'data_vg': 'ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073'})
2025-09-19 06:58:43.815064 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03228564-3151-5027-920d-737061be0eca', 'data_vg': 'ceph-03228564-3151-5027-920d-737061be0eca'})
2025-09-19 06:58:43.815083 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:43.815102 | orchestrator |
2025-09-19 06:58:43.815120 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-09-19 06:58:43.815140 | orchestrator | Friday 19 September 2025 06:58:43 +0000 (0:00:00.135) 0:01:01.559 ******
2025-09-19 06:58:43.815158 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:58:43.815177 | orchestrator |
2025-09-19 06:58:43.815198 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-09-19 06:58:43.815217 | orchestrator | Friday 19 September 2025 06:58:43 +0000 (0:00:00.119) 0:01:01.678 ******
2025-09-19 06:58:43.815253 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2af2e838-b751-5a2f-ab09-cbc0dc745073', 'data_vg': 'ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073'})
2025-09-19 06:58:49.268823 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03228564-3151-5027-920d-737061be0eca', 'data_vg': 'ceph-03228564-3151-5027-920d-737061be0eca'})
2025-09-19 06:58:49.268930 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:49.268946 | orchestrator |
2025-09-19 06:58:49.268959 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-09-19 06:58:49.268971 | orchestrator | Friday 19 September 2025 06:58:43 +0000 (0:00:00.271) 0:01:01.949 ******
2025-09-19 06:58:49.268983 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2af2e838-b751-5a2f-ab09-cbc0dc745073', 'data_vg': 'ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073'})
2025-09-19 06:58:49.268995 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03228564-3151-5027-920d-737061be0eca', 'data_vg': 'ceph-03228564-3151-5027-920d-737061be0eca'})
2025-09-19 06:58:49.269006 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:49.269017 | orchestrator |
2025-09-19 06:58:49.269029 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-09-19 06:58:49.269040 | orchestrator | Friday 19 September 2025 06:58:43 +0000 (0:00:00.142) 0:01:02.092 ******
2025-09-19 06:58:49.269052 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2af2e838-b751-5a2f-ab09-cbc0dc745073', 'data_vg': 'ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073'})
2025-09-19 06:58:49.269063 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03228564-3151-5027-920d-737061be0eca', 'data_vg': 'ceph-03228564-3151-5027-920d-737061be0eca'})
2025-09-19 06:58:49.269075 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:49.269086 | orchestrator |
2025-09-19 06:58:49.269121 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-09-19 06:58:49.269133 | orchestrator | Friday 19 September 2025 06:58:44 +0000 (0:00:00.140) 0:01:02.233 ******
2025-09-19 06:58:49.269144 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:49.269173 | orchestrator |
2025-09-19 06:58:49.269196 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-09-19 06:58:49.269207 | orchestrator | Friday 19 September 2025 06:58:44 +0000 (0:00:00.120) 0:01:02.353 ******
2025-09-19 06:58:49.269218 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:49.269229 | orchestrator |
2025-09-19 06:58:49.269240 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-19 06:58:49.269251 | orchestrator | Friday 19 September 2025 06:58:44 +0000 (0:00:00.135) 0:01:02.489 ******
2025-09-19 06:58:49.269262 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:49.269273 | orchestrator |
2025-09-19 06:58:49.269285 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-19 06:58:49.269311 | orchestrator | Friday 19 September 2025 06:58:44 +0000 (0:00:00.127) 0:01:02.616 ******
2025-09-19 06:58:49.269322 | orchestrator | ok: [testbed-node-5] => {
2025-09-19 06:58:49.269334 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-09-19 06:58:49.269345 | orchestrator | }
2025-09-19 06:58:49.269357 | orchestrator |
2025-09-19 06:58:49.269371 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-19 06:58:49.269384 | orchestrator | Friday 19 September 2025 06:58:44 +0000 (0:00:00.143) 0:01:02.760 ******
2025-09-19 06:58:49.269396 | orchestrator | ok: [testbed-node-5] => {
2025-09-19 06:58:49.269409 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-09-19 06:58:49.269422 | orchestrator | }
2025-09-19 06:58:49.269435 | orchestrator |
2025-09-19 06:58:49.269465 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-19 06:58:49.269478 | orchestrator | Friday 19 September 2025 06:58:44 +0000 (0:00:00.137) 0:01:02.897 ******
2025-09-19 06:58:49.269491 | orchestrator | ok: [testbed-node-5] => {
2025-09-19 06:58:49.269503 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-09-19 06:58:49.269517 | orchestrator | }
2025-09-19 06:58:49.269530 | orchestrator |
2025-09-19 06:58:49.269542 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-19 06:58:49.269555 | orchestrator | Friday 19 September 2025 06:58:44 +0000 (0:00:00.133) 0:01:03.030 ******
2025-09-19 06:58:49.269567 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:58:49.269580 | orchestrator |
2025-09-19 06:58:49.269592 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-19 06:58:49.269606 | orchestrator | Friday 19 September 2025 06:58:45 +0000 (0:00:00.480) 0:01:03.511 ******
2025-09-19 06:58:49.269619 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:58:49.269632 | orchestrator |
2025-09-19 06:58:49.269644 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-19 06:58:49.269657 | orchestrator | Friday 19 September 2025 06:58:45 +0000 (0:00:00.488) 0:01:04.000 ******
2025-09-19 06:58:49.269670 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:58:49.269683 | orchestrator |
2025-09-19 06:58:49.269695 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-19 06:58:49.269708 | orchestrator | Friday 19 September 2025 06:58:46 +0000 (0:00:00.487) 0:01:04.488 ******
2025-09-19 06:58:49.269721 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:58:49.269732 | orchestrator |
2025-09-19 06:58:49.269743 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-19 06:58:49.269754 | orchestrator | Friday 19 September 2025 06:58:46 +0000 (0:00:00.263) 0:01:04.751 ******
2025-09-19 06:58:49.269765 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:49.269776 | orchestrator |
2025-09-19 06:58:49.269787 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-19 06:58:49.269799 | orchestrator | Friday 19 September 2025 06:58:46 +0000 (0:00:00.104) 0:01:04.856 ******
2025-09-19 06:58:49.269810 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:49.269829 | orchestrator |
2025-09-19 06:58:49.269840 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-19 06:58:49.269852 | orchestrator | Friday 19 September 2025 06:58:46 +0000 (0:00:00.104) 0:01:04.960 ******
2025-09-19 06:58:49.269863 | orchestrator | ok: [testbed-node-5] => {
2025-09-19 06:58:49.269874 | orchestrator |  "vgs_report": {
2025-09-19 06:58:49.269885 | orchestrator |  "vg": []
2025-09-19 06:58:49.269913 | orchestrator |  }
2025-09-19 06:58:49.269925 | orchestrator | }
2025-09-19 06:58:49.269936 | orchestrator |
2025-09-19 06:58:49.269948 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-19 06:58:49.269959 | orchestrator | Friday 19 September 2025 06:58:46 +0000 (0:00:00.121) 0:01:05.082 ******
2025-09-19 06:58:49.269970 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:49.269981 | orchestrator |
2025-09-19 06:58:49.269993 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-19 06:58:49.270004 | orchestrator | Friday 19 September 2025 06:58:47 +0000 (0:00:00.120) 0:01:05.202 ******
2025-09-19 06:58:49.270067 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:49.270083 | orchestrator |
2025-09-19 06:58:49.270094 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-19 06:58:49.270105 | orchestrator | Friday 19 September 2025 06:58:47 +0000 (0:00:00.128) 0:01:05.331 ******
2025-09-19 06:58:49.270117 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:49.270128 | orchestrator |
2025-09-19 06:58:49.270139 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-19 06:58:49.270150 | orchestrator | Friday 19 September 2025 06:58:47 +0000 (0:00:00.126) 0:01:05.458 ******
2025-09-19 06:58:49.270161 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:49.270172 | orchestrator |
2025-09-19 06:58:49.270183 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-19 06:58:49.270194 | orchestrator | Friday 19 September 2025 06:58:47 +0000 (0:00:00.122) 0:01:05.580 ******
2025-09-19 06:58:49.270206 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:49.270217 | orchestrator |
2025-09-19 06:58:49.270228 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-19 06:58:49.270239 | orchestrator | Friday 19 September 2025 06:58:47 +0000 (0:00:00.115) 0:01:05.695 ******
2025-09-19 06:58:49.270250 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:49.270261 | orchestrator |
2025-09-19 06:58:49.270272 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-19 06:58:49.270283 | orchestrator | Friday 19 September 2025 06:58:47 +0000 (0:00:00.137) 0:01:05.832 ******
2025-09-19 06:58:49.270295 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:49.270306 | orchestrator |
2025-09-19 06:58:49.270317 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-19 06:58:49.270328 | orchestrator | Friday 19 September 2025 06:58:47 +0000 (0:00:00.121) 0:01:05.954 ******
2025-09-19 06:58:49.270339 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:49.270350 | orchestrator |
2025-09-19 06:58:49.270361 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-19 06:58:49.270372 | orchestrator | Friday 19 September 2025 06:58:47 +0000 (0:00:00.117) 0:01:06.072 ******
2025-09-19 06:58:49.270383 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:49.270394 | orchestrator |
2025-09-19 06:58:49.270406 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-19 06:58:49.270417 | orchestrator | Friday 19 September 2025 06:58:48 +0000 (0:00:00.254) 0:01:06.326 ******
2025-09-19 06:58:49.270434 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:49.270464 | orchestrator |
2025-09-19 06:58:49.270476 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-19 06:58:49.270487 | orchestrator | Friday 19 September 2025 06:58:48 +0000 (0:00:00.128) 0:01:06.454 ******
2025-09-19 06:58:49.270498 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:49.270509 | orchestrator |
2025-09-19 06:58:49.270520 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-19 06:58:49.270539 | orchestrator | Friday 19 September 2025 06:58:48 +0000 (0:00:00.132) 0:01:06.587 ******
2025-09-19 06:58:49.270550 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:49.270561 | orchestrator |
2025-09-19 06:58:49.270573 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-19 06:58:49.270584 | orchestrator | Friday 19 September 2025 06:58:48 +0000 (0:00:00.138) 0:01:06.726 ******
2025-09-19 06:58:49.270595 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:49.270606 | orchestrator |
2025-09-19 06:58:49.270617 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-19 06:58:49.270628 | orchestrator | Friday 19 September 2025 06:58:48 +0000 (0:00:00.133) 0:01:06.859 ******
2025-09-19 06:58:49.270639 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:49.270650 | orchestrator |
2025-09-19 06:58:49.270661 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-19 06:58:49.270672 | orchestrator | Friday 19 September 2025 06:58:48 +0000 (0:00:00.126) 0:01:06.986 ******
2025-09-19 06:58:49.270684 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2af2e838-b751-5a2f-ab09-cbc0dc745073', 'data_vg': 'ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073'})
2025-09-19 06:58:49.270695 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03228564-3151-5027-920d-737061be0eca', 'data_vg': 'ceph-03228564-3151-5027-920d-737061be0eca'})
2025-09-19 06:58:49.270706 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:49.270717 | orchestrator |
2025-09-19 06:58:49.270728 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-19 06:58:49.270739 | orchestrator | Friday 19 September 2025 06:58:48 +0000 (0:00:00.144) 0:01:07.130 ******
2025-09-19 06:58:49.270750 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2af2e838-b751-5a2f-ab09-cbc0dc745073', 'data_vg': 'ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073'})
2025-09-19 06:58:49.270762 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03228564-3151-5027-920d-737061be0eca', 'data_vg': 'ceph-03228564-3151-5027-920d-737061be0eca'})
2025-09-19 06:58:49.270773 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:49.270784 | orchestrator |
2025-09-19 06:58:49.270795 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-19 06:58:49.270806 | orchestrator | Friday 19 September 2025 06:58:49 +0000 (0:00:00.138) 0:01:07.269 ******
2025-09-19 06:58:49.270825 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2af2e838-b751-5a2f-ab09-cbc0dc745073', 'data_vg': 'ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073'})
2025-09-19 06:58:52.044142 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03228564-3151-5027-920d-737061be0eca', 'data_vg': 'ceph-03228564-3151-5027-920d-737061be0eca'})
2025-09-19 06:58:52.044241 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:52.044257 | orchestrator |
2025-09-19 06:58:52.044270 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-19 06:58:52.044283 | orchestrator | Friday 19 September 2025 06:58:49 +0000 (0:00:00.136) 0:01:07.406 ******
2025-09-19 06:58:52.044295 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2af2e838-b751-5a2f-ab09-cbc0dc745073', 'data_vg': 'ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073'})
2025-09-19 06:58:52.044307 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03228564-3151-5027-920d-737061be0eca', 'data_vg': 'ceph-03228564-3151-5027-920d-737061be0eca'})
2025-09-19 06:58:52.044318 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:52.044330 | orchestrator |
2025-09-19 06:58:52.044341 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-19 06:58:52.044352 | orchestrator | Friday 19 September 2025 06:58:49 +0000 (0:00:00.144) 0:01:07.550 ******
2025-09-19 06:58:52.044364 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2af2e838-b751-5a2f-ab09-cbc0dc745073', 'data_vg': 'ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073'})
2025-09-19 06:58:52.044402 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03228564-3151-5027-920d-737061be0eca', 'data_vg': 'ceph-03228564-3151-5027-920d-737061be0eca'})
2025-09-19 06:58:52.044414 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:52.044425 | orchestrator |
2025-09-19 06:58:52.044437 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-19 06:58:52.044497 | orchestrator | Friday 19 September 2025 06:58:49 +0000 (0:00:00.147) 0:01:07.697 ******
2025-09-19 06:58:52.044510 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2af2e838-b751-5a2f-ab09-cbc0dc745073', 'data_vg': 'ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073'})
2025-09-19 06:58:52.044521 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03228564-3151-5027-920d-737061be0eca', 'data_vg': 'ceph-03228564-3151-5027-920d-737061be0eca'})
2025-09-19 06:58:52.044533 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:52.044544 | orchestrator |
2025-09-19 06:58:52.044556 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-19 06:58:52.044567 | orchestrator | Friday 19 September 2025 06:58:49 +0000 (0:00:00.144) 0:01:07.841 ******
2025-09-19 06:58:52.044579 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2af2e838-b751-5a2f-ab09-cbc0dc745073', 'data_vg': 'ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073'})
2025-09-19 06:58:52.044590 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03228564-3151-5027-920d-737061be0eca', 'data_vg': 'ceph-03228564-3151-5027-920d-737061be0eca'})
2025-09-19 06:58:52.044602 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:52.044613 | orchestrator |
2025-09-19 06:58:52.044625 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-19 06:58:52.044637 | orchestrator | Friday 19 September 2025 06:58:49 +0000 (0:00:00.273) 0:01:08.115 ******
2025-09-19 06:58:52.044648 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2af2e838-b751-5a2f-ab09-cbc0dc745073', 'data_vg': 'ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073'})
2025-09-19 06:58:52.044660 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03228564-3151-5027-920d-737061be0eca', 'data_vg': 'ceph-03228564-3151-5027-920d-737061be0eca'})
2025-09-19 06:58:52.044672 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:52.044686 | orchestrator |
2025-09-19 06:58:52.044699 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-19 06:58:52.044712 | orchestrator | Friday 19 September 2025 06:58:50 +0000 (0:00:00.145) 0:01:08.260 ******
2025-09-19 06:58:52.044724 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:58:52.044739 | orchestrator |
2025-09-19 06:58:52.044752 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-09-19 06:58:52.044765 | orchestrator | Friday 19 September 2025 06:58:50 +0000 (0:00:00.520) 0:01:08.780 ******
2025-09-19 06:58:52.044778 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:58:52.044791 | orchestrator |
2025-09-19 06:58:52.044804 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-19 06:58:52.044817 | orchestrator | Friday 19 September 2025 06:58:51 +0000 (0:00:00.494) 0:01:09.275 ******
2025-09-19 06:58:52.044829 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:58:52.044843 | orchestrator |
2025-09-19 06:58:52.044855 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-19 06:58:52.044868 | orchestrator | Friday 19 September 2025 06:58:51 +0000 (0:00:00.137) 0:01:09.412 ******
2025-09-19 06:58:52.044881 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-03228564-3151-5027-920d-737061be0eca', 'vg_name': 'ceph-03228564-3151-5027-920d-737061be0eca'})
2025-09-19 06:58:52.044895 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-2af2e838-b751-5a2f-ab09-cbc0dc745073', 'vg_name': 'ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073'})
2025-09-19 06:58:52.044908 | orchestrator |
2025-09-19 06:58:52.044921 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-19 06:58:52.044943 | orchestrator | Friday 19 September 2025 06:58:51 +0000 (0:00:00.160) 0:01:09.572 ******
2025-09-19 06:58:52.044972 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2af2e838-b751-5a2f-ab09-cbc0dc745073', 'data_vg': 'ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073'})
2025-09-19 06:58:52.044986 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03228564-3151-5027-920d-737061be0eca', 'data_vg': 'ceph-03228564-3151-5027-920d-737061be0eca'})
2025-09-19 06:58:52.044999 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:52.045012 | orchestrator |
2025-09-19 06:58:52.045026 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-19 06:58:52.045038 | orchestrator | Friday 19 September 2025 06:58:51 +0000 (0:00:00.150) 0:01:09.722 ******
2025-09-19 06:58:52.045050 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2af2e838-b751-5a2f-ab09-cbc0dc745073', 'data_vg': 'ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073'})
2025-09-19 06:58:52.045061 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03228564-3151-5027-920d-737061be0eca', 'data_vg': 'ceph-03228564-3151-5027-920d-737061be0eca'})
2025-09-19 06:58:52.045072 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:52.045084 | orchestrator |
2025-09-19 06:58:52.045095 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-19 06:58:52.045107 | orchestrator | Friday 19 September 2025 06:58:51 +0000 (0:00:00.143) 0:01:09.866 ******
2025-09-19 06:58:52.045118 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2af2e838-b751-5a2f-ab09-cbc0dc745073', 'data_vg': 'ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073'})
2025-09-19 06:58:52.045163 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-03228564-3151-5027-920d-737061be0eca', 'data_vg': 'ceph-03228564-3151-5027-920d-737061be0eca'})
2025-09-19 06:58:52.045182 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:58:52.045201 | orchestrator |
2025-09-19 06:58:52.045220 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-19 06:58:52.045238 | orchestrator | Friday 19 September 2025 06:58:51 +0000 (0:00:00.148) 0:01:10.015 ******
2025-09-19 06:58:52.045256 | orchestrator | ok: [testbed-node-5] => {
2025-09-19 06:58:52.045274 | orchestrator |  "lvm_report": {
2025-09-19 06:58:52.045292 | orchestrator |  "lv": [
2025-09-19 06:58:52.045308 | orchestrator |  {
2025-09-19 06:58:52.045325 | orchestrator |  "lv_name": "osd-block-03228564-3151-5027-920d-737061be0eca",
2025-09-19 06:58:52.045344 | orchestrator |  "vg_name": "ceph-03228564-3151-5027-920d-737061be0eca"
2025-09-19 06:58:52.045363 | orchestrator |  },
2025-09-19 06:58:52.045388 | orchestrator |  {
2025-09-19 06:58:52.045407 | orchestrator |  "lv_name": "osd-block-2af2e838-b751-5a2f-ab09-cbc0dc745073",
2025-09-19 06:58:52.045427 | orchestrator |  "vg_name": "ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073"
2025-09-19 06:58:52.045484 | orchestrator |  }
2025-09-19 06:58:52.045499 | orchestrator |  ],
2025-09-19 06:58:52.045511 | orchestrator |  "pv": [
2025-09-19 06:58:52.045522 | orchestrator |  {
2025-09-19 06:58:52.045533 | orchestrator |  "pv_name": "/dev/sdb",
2025-09-19 06:58:52.045545 | orchestrator |  "vg_name": "ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073"
2025-09-19 06:58:52.045556 | orchestrator |  },
2025-09-19 06:58:52.045567 | orchestrator |  {
2025-09-19 06:58:52.045578 | orchestrator |  "pv_name": "/dev/sdc",
2025-09-19 06:58:52.045589 | orchestrator |  "vg_name": "ceph-03228564-3151-5027-920d-737061be0eca"
2025-09-19 06:58:52.045601 | orchestrator |  }
2025-09-19 06:58:52.045612 | orchestrator |  ]
2025-09-19 06:58:52.045623 | orchestrator |  }
2025-09-19 06:58:52.045635 | orchestrator | }
2025-09-19 06:58:52.045646 | orchestrator |
2025-09-19 06:58:52.045657 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:58:52.045669 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-19 06:58:52.045690 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-19 06:58:52.045702 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-19 06:58:52.045713 | orchestrator |
2025-09-19 06:58:52.045724 | orchestrator |
2025-09-19 06:58:52.045735 | orchestrator |
2025-09-19 06:58:52.045747 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 06:58:52.045758 | orchestrator | Friday 19 September 2025 06:58:52 +0000 (0:00:00.142) 0:01:10.157 ******
2025-09-19 06:58:52.045769 | orchestrator | ===============================================================================
2025-09-19 06:58:52.045780 | orchestrator | Create block VGs -------------------------------------------------------- 5.54s
2025-09-19 06:58:52.045792 | orchestrator | Create block LVs -------------------------------------------------------- 4.00s
2025-09-19 06:58:52.045803 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.87s
2025-09-19 06:58:52.045814 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.57s
2025-09-19 06:58:52.045825 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.56s
2025-09-19 06:58:52.045836 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.54s
2025-09-19 06:58:52.045847 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.48s
2025-09-19 06:58:52.045859 | orchestrator | Add known partitions to the list of available block devices ------------- 1.46s
2025-09-19 06:58:52.045880 | orchestrator | Add known links to the list of available block devices ------------------ 1.25s
2025-09-19 06:58:52.279300 | orchestrator | Add known partitions to the list of available block devices ------------- 1.02s
2025-09-19 06:58:52.279388 | orchestrator | Add known partitions to the list of available block devices ------------- 0.92s
2025-09-19 06:58:52.279402 | orchestrator | Print LVM report data --------------------------------------------------- 0.85s
2025-09-19 06:58:52.279414 | orchestrator | Add known partitions to the list of available block devices ------------- 0.82s
2025-09-19 06:58:52.279425 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.73s
2025-09-19 06:58:52.279436 | orchestrator | Get initial list of available block devices ----------------------------- 0.70s
2025-09-19 06:58:52.279498 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s
2025-09-19 06:58:52.279510 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 0.65s
2025-09-19 06:58:52.279521 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.65s
2025-09-19 06:58:52.279533 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.62s
2025-09-19 06:58:52.279545 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.62s
2025-09-19 06:59:04.266653 | orchestrator | 2025-09-19 06:59:04 | INFO  | Task 24f0282e-953a-4341-99bb-4c9654f2844c (facts) was prepared for execution.
2025-09-19 06:59:04.266738 | orchestrator | 2025-09-19 06:59:04 | INFO  | It takes a moment until task 24f0282e-953a-4341-99bb-4c9654f2844c (facts) has been started and output is visible here.
2025-09-19 06:59:17.754952 | orchestrator |
2025-09-19 06:59:17.755064 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-09-19 06:59:17.755082 | orchestrator |
2025-09-19 06:59:17.755094 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-19 06:59:17.755106 | orchestrator | Friday 19 September 2025 06:59:08 +0000 (0:00:00.276) 0:00:00.276 ******
2025-09-19 06:59:17.755118 | orchestrator | ok: [testbed-manager]
2025-09-19 06:59:17.755131 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:59:17.755142 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:59:17.755179 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:59:17.755191 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:59:17.755202 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:59:17.755213 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:59:17.755224 | orchestrator |
2025-09-19 06:59:17.755236 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-19 06:59:17.755247 | orchestrator | Friday 19 September 2025 06:59:09 +0000 (0:00:01.159) 0:00:01.435 ******
2025-09-19 06:59:17.755258 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:59:17.755286 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:59:17.755298 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:59:17.755310 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:59:17.755321 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:17.755333 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:17.755344 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:59:17.755355 | orchestrator |
2025-09-19 06:59:17.755366 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-19 06:59:17.755378 | orchestrator |
2025-09-19 06:59:17.755389 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-19 06:59:17.755400 | orchestrator | Friday 19 September 2025 06:59:10 +0000 (0:00:01.272) 0:00:02.707 ******
2025-09-19 06:59:17.755411 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:59:17.755422 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:59:17.755491 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:59:17.755506 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:59:17.755518 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:59:17.755530 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:59:17.755543 | orchestrator | ok: [testbed-manager]
2025-09-19 06:59:17.755557 | orchestrator |
2025-09-19 06:59:17.755571 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-19 06:59:17.755583 | orchestrator |
2025-09-19 06:59:17.755596 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-19 06:59:17.755607 | orchestrator | Friday 19 September 2025 06:59:16 +0000 (0:00:06.198) 0:00:08.906 ******
2025-09-19 06:59:17.755618 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:59:17.755630 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:59:17.755641 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:59:17.755652 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:59:17.755663 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:17.755674 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:17.755685 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:59:17.755697 | orchestrator |
2025-09-19 06:59:17.755708 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:59:17.755719 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:59:17.755732 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:59:17.755743 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:59:17.755755 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:59:17.755766 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:59:17.755777 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:59:17.755788 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:59:17.755799 | orchestrator |
2025-09-19 06:59:17.755811 | orchestrator |
2025-09-19 06:59:17.755831 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 06:59:17.755843 | orchestrator | Friday 19 September 2025 06:59:17 +0000 (0:00:00.514) 0:00:09.420 ******
2025-09-19 06:59:17.755854 | orchestrator | ===============================================================================
2025-09-19 06:59:17.755865 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.20s
2025-09-19 06:59:17.755876 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.27s
2025-09-19 06:59:17.755887 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.16s
2025-09-19 06:59:17.755899 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s
2025-09-19 06:59:29.728978 | orchestrator | 2025-09-19 06:59:29 | INFO  | Task ae605183-7077-4c78-bdaf-82dd34820ab9 (frr) was prepared for execution.
2025-09-19 06:59:29.729089 | orchestrator | 2025-09-19 06:59:29 | INFO  | It takes a moment until task ae605183-7077-4c78-bdaf-82dd34820ab9 (frr) has been started and output is visible here.
2025-09-19 06:59:53.708070 | orchestrator |
2025-09-19 06:59:53.708200 | orchestrator | PLAY [Apply role frr] **********************************************************
2025-09-19 06:59:53.708218 | orchestrator |
2025-09-19 06:59:53.708243 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2025-09-19 06:59:53.708256 | orchestrator | Friday 19 September 2025  06:59:33 +0000 (0:00:00.236)       0:00:00.236 ******
2025-09-19 06:59:53.708268 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2025-09-19 06:59:53.708281 | orchestrator |
2025-09-19 06:59:53.708292 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2025-09-19 06:59:53.708304 | orchestrator | Friday 19 September 2025  06:59:33 +0000 (0:00:00.221)       0:00:00.458 ******
2025-09-19 06:59:53.708315 | orchestrator | changed: [testbed-manager]
2025-09-19 06:59:53.708327 | orchestrator |
2025-09-19 06:59:53.708339 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2025-09-19 06:59:53.708350 | orchestrator | Friday 19 September 2025  06:59:35 +0000 (0:00:01.209)       0:00:01.668 ******
2025-09-19 06:59:53.708361 | orchestrator | changed: [testbed-manager]
2025-09-19 06:59:53.708373 | orchestrator |
2025-09-19 06:59:53.708384 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2025-09-19 06:59:53.708412 | orchestrator | Friday 19 September 2025  06:59:44 +0000 (0:00:08.923)       0:00:10.592 ******
2025-09-19 06:59:53.708469 | orchestrator | ok: [testbed-manager]
2025-09-19 06:59:53.708482 | orchestrator |
2025-09-19 06:59:53.708493 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2025-09-19 06:59:53.708505 | orchestrator | Friday 19 September 2025  06:59:45 +0000 (0:00:01.188)       0:00:11.780 ******
2025-09-19 06:59:53.708516 | orchestrator | changed: [testbed-manager]
2025-09-19 06:59:53.708527 | orchestrator |
2025-09-19 06:59:53.708539 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2025-09-19 06:59:53.708550 | orchestrator | Friday 19 September 2025  06:59:46 +0000 (0:00:00.852)       0:00:12.633 ******
2025-09-19 06:59:53.708561 | orchestrator | ok: [testbed-manager]
2025-09-19 06:59:53.708572 | orchestrator |
2025-09-19 06:59:53.708584 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2025-09-19 06:59:53.708595 | orchestrator | Friday 19 September 2025  06:59:47 +0000 (0:00:01.076)       0:00:13.709 ******
2025-09-19 06:59:53.708609 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 06:59:53.708622 | orchestrator |
2025-09-19 06:59:53.708635 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] ***
2025-09-19 06:59:53.708647 | orchestrator | Friday 19 September 2025  06:59:47 +0000 (0:00:00.788)       0:00:14.498 ******
2025-09-19 06:59:53.708660 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:59:53.708672 | orchestrator |
2025-09-19 06:59:53.708685 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] *********
2025-09-19 06:59:53.708721 | orchestrator | Friday 19 September 2025  06:59:48 +0000 (0:00:00.146)       0:00:14.644 ******
2025-09-19 06:59:53.708735 | orchestrator | changed: [testbed-manager]
2025-09-19 06:59:53.708747 | orchestrator |
2025-09-19 06:59:53.708760 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2025-09-19 06:59:53.708773 | orchestrator | Friday 19 September 2025  06:59:49 +0000 (0:00:00.876)       0:00:15.521 ******
2025-09-19 06:59:53.708786 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2025-09-19 06:59:53.708797 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2025-09-19 06:59:53.708809 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2025-09-19 06:59:53.708821 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2025-09-19 06:59:53.708832 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2025-09-19 06:59:53.708843 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2025-09-19 06:59:53.708854 | orchestrator |
2025-09-19 06:59:53.708865 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2025-09-19 06:59:53.708877 | orchestrator | Friday 19 September 2025  06:59:50 +0000 (0:00:01.951)       0:00:17.473 ******
2025-09-19 06:59:53.708888 | orchestrator | ok: [testbed-manager]
2025-09-19 06:59:53.708899 | orchestrator |
2025-09-19 06:59:53.708910 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2025-09-19 06:59:53.708921 | orchestrator | Friday 19 September 2025  06:59:52 +0000 (0:00:01.212)       0:00:18.685 ******
2025-09-19 06:59:53.708933 | orchestrator | changed: [testbed-manager]
2025-09-19 06:59:53.708944 | orchestrator |
2025-09-19 06:59:53.708955 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:59:53.708966 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 06:59:53.708978 | orchestrator |
2025-09-19 06:59:53.708989 | orchestrator |
2025-09-19 06:59:53.709000 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 06:59:53.709011 | orchestrator | Friday 19 September 2025  06:59:53 +0000 (0:00:01.347)       0:00:20.032 ******
2025-09-19 06:59:53.709022 | orchestrator | ===============================================================================
2025-09-19 06:59:53.709034 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.92s
2025-09-19 06:59:53.709045 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 1.95s
2025-09-19 06:59:53.709056 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.35s
2025-09-19 06:59:53.709067 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.21s
2025-09-19 06:59:53.709097 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.21s
2025-09-19 06:59:53.709109 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.19s
2025-09-19 06:59:53.709120 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.08s
2025-09-19 06:59:53.709131 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.88s
2025-09-19 06:59:53.709142 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.85s
2025-09-19 06:59:53.709153 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.79s
2025-09-19 06:59:53.709164 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s
2025-09-19 06:59:53.709176 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.15s
2025-09-19 06:59:53.905683 | orchestrator |
2025-09-19 06:59:53.907907 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Fri Sep 19 06:59:53 UTC 2025
2025-09-19 06:59:53.907965 | orchestrator |
2025-09-19 06:59:55.553614 | orchestrator | 2025-09-19 06:59:55 | INFO  | Collection nutshell is prepared for execution
2025-09-19 06:59:55.553769 | orchestrator | 2025-09-19
06:59:55 | INFO  | D [0] - dotfiles
2025-09-19 07:00:05.597768 | orchestrator | 2025-09-19 07:00:05 | INFO  | D [0] - homer
2025-09-19 07:00:05.597871 | orchestrator | 2025-09-19 07:00:05 | INFO  | D [0] - netdata
2025-09-19 07:00:05.597887 | orchestrator | 2025-09-19 07:00:05 | INFO  | D [0] - openstackclient
2025-09-19 07:00:05.597899 | orchestrator | 2025-09-19 07:00:05 | INFO  | D [0] - phpmyadmin
2025-09-19 07:00:05.597911 | orchestrator | 2025-09-19 07:00:05 | INFO  | A [0] - common
2025-09-19 07:00:05.601813 | orchestrator | 2025-09-19 07:00:05 | INFO  | A [1] -- loadbalancer
2025-09-19 07:00:05.601844 | orchestrator | 2025-09-19 07:00:05 | INFO  | D [2] --- opensearch
2025-09-19 07:00:05.602193 | orchestrator | 2025-09-19 07:00:05 | INFO  | A [2] --- mariadb-ng
2025-09-19 07:00:05.602216 | orchestrator | 2025-09-19 07:00:05 | INFO  | D [3] ---- horizon
2025-09-19 07:00:05.602603 | orchestrator | 2025-09-19 07:00:05 | INFO  | A [3] ---- keystone
2025-09-19 07:00:05.602625 | orchestrator | 2025-09-19 07:00:05 | INFO  | A [4] ----- neutron
2025-09-19 07:00:05.602887 | orchestrator | 2025-09-19 07:00:05 | INFO  | D [5] ------ wait-for-nova
2025-09-19 07:00:05.602910 | orchestrator | 2025-09-19 07:00:05 | INFO  | A [5] ------ octavia
2025-09-19 07:00:05.604377 | orchestrator | 2025-09-19 07:00:05 | INFO  | D [4] ----- barbican
2025-09-19 07:00:05.604500 | orchestrator | 2025-09-19 07:00:05 | INFO  | D [4] ----- designate
2025-09-19 07:00:05.604517 | orchestrator | 2025-09-19 07:00:05 | INFO  | D [4] ----- ironic
2025-09-19 07:00:05.604536 | orchestrator | 2025-09-19 07:00:05 | INFO  | D [4] ----- placement
2025-09-19 07:00:05.604622 | orchestrator | 2025-09-19 07:00:05 | INFO  | D [4] ----- magnum
2025-09-19 07:00:05.605463 | orchestrator | 2025-09-19 07:00:05 | INFO  | A [1] -- openvswitch
2025-09-19 07:00:05.605683 | orchestrator | 2025-09-19 07:00:05 | INFO  | D [2] --- ovn
2025-09-19 07:00:05.606132 | orchestrator | 2025-09-19 07:00:05 | INFO  | D [1] -- memcached
2025-09-19 07:00:05.606158 | orchestrator | 2025-09-19 07:00:05 | INFO  | D [1] -- redis
2025-09-19 07:00:05.606170 | orchestrator | 2025-09-19 07:00:05 | INFO  | D [1] -- rabbitmq-ng
2025-09-19 07:00:05.606499 | orchestrator | 2025-09-19 07:00:05 | INFO  | A [0] - kubernetes
2025-09-19 07:00:05.608590 | orchestrator | 2025-09-19 07:00:05 | INFO  | D [1] -- kubeconfig
2025-09-19 07:00:05.608619 | orchestrator | 2025-09-19 07:00:05 | INFO  | A [1] -- copy-kubeconfig
2025-09-19 07:00:05.608844 | orchestrator | 2025-09-19 07:00:05 | INFO  | A [0] - ceph
2025-09-19 07:00:05.610847 | orchestrator | 2025-09-19 07:00:05 | INFO  | A [1] -- ceph-pools
2025-09-19 07:00:05.610882 | orchestrator | 2025-09-19 07:00:05 | INFO  | A [2] --- copy-ceph-keys
2025-09-19 07:00:05.610896 | orchestrator | 2025-09-19 07:00:05 | INFO  | A [3] ---- cephclient
2025-09-19 07:00:05.611155 | orchestrator | 2025-09-19 07:00:05 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-09-19 07:00:05.611178 | orchestrator | 2025-09-19 07:00:05 | INFO  | A [4] ----- wait-for-keystone
2025-09-19 07:00:05.611460 | orchestrator | 2025-09-19 07:00:05 | INFO  | D [5] ------ kolla-ceph-rgw
2025-09-19 07:00:05.611481 | orchestrator | 2025-09-19 07:00:05 | INFO  | D [5] ------ glance
2025-09-19 07:00:05.611493 | orchestrator | 2025-09-19 07:00:05 | INFO  | D [5] ------ cinder
2025-09-19 07:00:05.611622 | orchestrator | 2025-09-19 07:00:05 | INFO  | D [5] ------ nova
2025-09-19 07:00:05.611869 | orchestrator | 2025-09-19 07:00:05 | INFO  | A [4] ----- prometheus
2025-09-19 07:00:05.612004 | orchestrator | 2025-09-19 07:00:05 | INFO  | D [5] ------ grafana
2025-09-19 07:00:05.811124 | orchestrator | 2025-09-19 07:00:05 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-09-19 07:00:05.814266 | orchestrator | 2025-09-19 07:00:05 | INFO  | Tasks are running in the background
2025-09-19 07:00:08.374536 | orchestrator | 2025-09-19 07:00:08 | INFO  | No task IDs specified, wait for
all currently running tasks
2025-09-19 07:00:10.504479 | orchestrator | 2025-09-19 07:00:10 | INFO  | Task e5423dca-74c9-4d43-a5a9-44aa1bcb3f7f is in state STARTED
2025-09-19 07:00:10.505498 | orchestrator | 2025-09-19 07:00:10 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED
2025-09-19 07:00:10.506123 | orchestrator | 2025-09-19 07:00:10 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:00:10.508549 | orchestrator | 2025-09-19 07:00:10 | INFO  | Task b41a3762-5555-486f-a669-847f393a3f9b is in state STARTED
2025-09-19 07:00:10.509084 | orchestrator | 2025-09-19 07:00:10 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:00:10.509706 | orchestrator | 2025-09-19 07:00:10 | INFO  | Task 6f1cd550-1611-4aba-8ec6-7ae21fffdd8d is in state STARTED
2025-09-19 07:00:10.510256 | orchestrator | 2025-09-19 07:00:10 | INFO  | Task 1699e950-d500-428f-9749-6a44bf1ec964 is in state STARTED
2025-09-19 07:00:10.510384 | orchestrator | 2025-09-19 07:00:10 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:00:32.246809 | orchestrator | 2025-09-19 07:00:32 | INFO  | Task e5423dca-74c9-4d43-a5a9-44aa1bcb3f7f is in state STARTED
2025-09-19 07:00:32.246908 | orchestrator | 2025-09-19 07:00:32 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED
2025-09-19 07:00:32.249726 | orchestrator | 2025-09-19 07:00:32 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:00:32.258544 | orchestrator | 2025-09-19 07:00:32 | INFO  | Task
b41a3762-5555-486f-a669-847f393a3f9b is in state STARTED
2025-09-19 07:00:32.262576 | orchestrator | 2025-09-19 07:00:32 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:00:32.263903 | orchestrator |
2025-09-19 07:00:32.263930 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-09-19 07:00:32.263941 | orchestrator |
2025-09-19 07:00:32.263952 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-09-19 07:00:32.263962 | orchestrator | Friday 19 September 2025  07:00:18 +0000 (0:00:01.165)       0:00:01.165 ******
2025-09-19 07:00:32.263972 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:00:32.263983 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:00:32.263993 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:00:32.264003 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:00:32.264013 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:00:32.264023 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:00:32.264033 | orchestrator | changed: [testbed-manager]
2025-09-19 07:00:32.264042 | orchestrator |
2025-09-19 07:00:32.264052 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-09-19 07:00:32.264063 | orchestrator | Friday 19 September 2025  07:00:22 +0000 (0:00:02.225)       0:00:04.706 ******
2025-09-19 07:00:32.264073 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-09-19 07:00:32.264083 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-09-19 07:00:32.264093 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-09-19 07:00:32.264103 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-09-19 07:00:32.264113 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-09-19 07:00:32.264123 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-09-19 07:00:32.264133 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-09-19 07:00:32.264143 | orchestrator |
2025-09-19 07:00:32.264154 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-09-19 07:00:32.264164 | orchestrator | Friday 19 September 2025  07:00:24 +0000 (0:00:01.883)       0:00:06.932 ******
2025-09-19 07:00:32.264185 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 07:00:23.265951', 'end': '2025-09-19 07:00:23.273676', 'delta': '0:00:00.007725', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 07:00:32.264223 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 07:00:22.983025', 'end': '2025-09-19 07:00:22.988048', 'delta': '0:00:00.005023', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 07:00:32.264235 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 07:00:23.939679', 'end': '2025-09-19 07:00:23.944861', 'delta': '0:00:00.005182', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 07:00:32.264258 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 07:00:23.610101', 'end': '2025-09-19 07:00:23.620499', 'delta': '0:00:00.010398', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 07:00:32.264269 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 07:00:24.240376', 'end': '2025-09-19 07:00:24.249686', 'delta': '0:00:00.009310', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 07:00:32.264284 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 07:00:24.118412', 'end': '2025-09-19 07:00:24.127877', 'delta': '0:00:00.009465', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 07:00:32.264308 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 07:00:23.626351', 'end': '2025-09-19 07:00:23.635471', 'delta': '0:00:00.009120', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 07:00:32.264319 | orchestrator |
2025-09-19 07:00:32.264329 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-09-19 07:00:32.264339 | orchestrator | Friday 19 September 2025  07:00:26 +0000 (0:00:01.248)       0:00:08.816 ******
2025-09-19 07:00:32.264349 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-09-19 07:00:32.264359 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-09-19 07:00:32.264369 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-09-19 07:00:32.264379 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-09-19 07:00:32.264388 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-09-19 07:00:32.264398 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-09-19 07:00:32.264436 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-09-19 07:00:32.264447 | orchestrator |
2025-09-19 07:00:32.264459 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-09-19 07:00:32.264477 | orchestrator | Friday 19 September 2025  07:00:27 +0000 (0:00:01.248)       0:00:10.064 ******
2025-09-19 07:00:32.264493 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-09-19 07:00:32.264510 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-09-19 07:00:32.264527 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-09-19 07:00:32.264544 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-09-19 07:00:32.264561 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-09-19 07:00:32.264574 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-09-19 07:00:32.264586 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-09-19 07:00:32.264597 | orchestrator |
2025-09-19 07:00:32.264608 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:00:32.264628 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:00:32.264642 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:00:32.264654 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:00:32.264665 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:00:32.264677 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:00:32.264689 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:00:32.264700 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:00:32.264723 | orchestrator |
2025-09-19 07:00:32.264734 | orchestrator |
2025-09-19 07:00:32.264746 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:00:32.264757 | orchestrator | Friday 19 September 2025  07:00:30 +0000 (0:00:02.918)       0:00:12.983 ******
2025-09-19 07:00:32.264768 | orchestrator | ===============================================================================
2025-09-19 07:00:32.264779 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.54s
2025-09-19 07:00:32.264791 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.92s
2025-09-19 07:00:32.264802 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.23s
2025-09-19 07:00:32.264813 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.88s
2025-09-19 07:00:32.264825 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.
---- 1.25s 2025-09-19 07:00:32.264837 | orchestrator | 2025-09-19 07:00:32 | INFO  | Task 6f1cd550-1611-4aba-8ec6-7ae21fffdd8d is in state SUCCESS 2025-09-19 07:00:32.266761 | orchestrator | 2025-09-19 07:00:32 | INFO  | Task 1699e950-d500-428f-9749-6a44bf1ec964 is in state STARTED 2025-09-19 07:00:32.266786 | orchestrator | 2025-09-19 07:00:32 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:00:35.320714 | orchestrator | 2025-09-19 07:00:35 | INFO  | Task e5423dca-74c9-4d43-a5a9-44aa1bcb3f7f is in state STARTED 2025-09-19 07:00:35.320828 | orchestrator | 2025-09-19 07:00:35 | INFO  | Task de82d5e4-37a9-4777-964f-18b14aa5e9af is in state STARTED 2025-09-19 07:00:35.320853 | orchestrator | 2025-09-19 07:00:35 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED 2025-09-19 07:00:35.322521 | orchestrator | 2025-09-19 07:00:35 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:00:35.322609 | orchestrator | 2025-09-19 07:00:35 | INFO  | Task b41a3762-5555-486f-a669-847f393a3f9b is in state STARTED 2025-09-19 07:00:35.323948 | orchestrator | 2025-09-19 07:00:35 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:00:35.323985 | orchestrator | 2025-09-19 07:00:35 | INFO  | Task 1699e950-d500-428f-9749-6a44bf1ec964 is in state STARTED 2025-09-19 07:00:35.323998 | orchestrator | 2025-09-19 07:00:35 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:00:38.364340 | orchestrator | 2025-09-19 07:00:38 | INFO  | Task e5423dca-74c9-4d43-a5a9-44aa1bcb3f7f is in state STARTED 2025-09-19 07:00:38.364479 | orchestrator | 2025-09-19 07:00:38 | INFO  | Task de82d5e4-37a9-4777-964f-18b14aa5e9af is in state STARTED 2025-09-19 07:00:38.364496 | orchestrator | 2025-09-19 07:00:38 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED 2025-09-19 07:00:38.365119 | orchestrator | 2025-09-19 07:00:38 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state 
STARTED 2025-09-19 07:00:38.365638 | orchestrator | 2025-09-19 07:00:38 | INFO  | Task b41a3762-5555-486f-a669-847f393a3f9b is in state STARTED 2025-09-19 07:00:38.367279 | orchestrator | 2025-09-19 07:00:38 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:00:38.367303 | orchestrator | 2025-09-19 07:00:38 | INFO  | Task 1699e950-d500-428f-9749-6a44bf1ec964 is in state STARTED 2025-09-19 07:00:38.367315 | orchestrator | 2025-09-19 07:00:38 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:00:41.447250 | orchestrator | 2025-09-19 07:00:41 | INFO  | Task e5423dca-74c9-4d43-a5a9-44aa1bcb3f7f is in state STARTED 2025-09-19 07:00:41.448768 | orchestrator | 2025-09-19 07:00:41 | INFO  | Task de82d5e4-37a9-4777-964f-18b14aa5e9af is in state STARTED 2025-09-19 07:00:41.449588 | orchestrator | 2025-09-19 07:00:41 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED 2025-09-19 07:00:41.450461 | orchestrator | 2025-09-19 07:00:41 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:00:41.451910 | orchestrator | 2025-09-19 07:00:41 | INFO  | Task b41a3762-5555-486f-a669-847f393a3f9b is in state STARTED 2025-09-19 07:00:41.452979 | orchestrator | 2025-09-19 07:00:41 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:00:41.454478 | orchestrator | 2025-09-19 07:00:41 | INFO  | Task 1699e950-d500-428f-9749-6a44bf1ec964 is in state STARTED 2025-09-19 07:00:41.454513 | orchestrator | 2025-09-19 07:00:41 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:00:44.483869 | orchestrator | 2025-09-19 07:00:44 | INFO  | Task e5423dca-74c9-4d43-a5a9-44aa1bcb3f7f is in state STARTED 2025-09-19 07:00:44.484485 | orchestrator | 2025-09-19 07:00:44 | INFO  | Task de82d5e4-37a9-4777-964f-18b14aa5e9af is in state STARTED 2025-09-19 07:00:44.485040 | orchestrator | 2025-09-19 07:00:44 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED 
2025-09-19 07:00:44.485700 | orchestrator | 2025-09-19 07:00:44 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:00:44.487379 | orchestrator | 2025-09-19 07:00:44 | INFO  | Task b41a3762-5555-486f-a669-847f393a3f9b is in state STARTED 2025-09-19 07:00:44.490557 | orchestrator | 2025-09-19 07:00:44 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:00:44.493118 | orchestrator | 2025-09-19 07:00:44 | INFO  | Task 1699e950-d500-428f-9749-6a44bf1ec964 is in state STARTED 2025-09-19 07:00:44.493150 | orchestrator | 2025-09-19 07:00:44 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:00:47.569211 | orchestrator | 2025-09-19 07:00:47 | INFO  | Task e5423dca-74c9-4d43-a5a9-44aa1bcb3f7f is in state STARTED 2025-09-19 07:00:47.572693 | orchestrator | 2025-09-19 07:00:47 | INFO  | Task de82d5e4-37a9-4777-964f-18b14aa5e9af is in state STARTED 2025-09-19 07:00:47.580993 | orchestrator | 2025-09-19 07:00:47 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED 2025-09-19 07:00:47.583579 | orchestrator | 2025-09-19 07:00:47 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:00:47.604093 | orchestrator | 2025-09-19 07:00:47 | INFO  | Task b41a3762-5555-486f-a669-847f393a3f9b is in state STARTED 2025-09-19 07:00:47.613060 | orchestrator | 2025-09-19 07:00:47 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:00:47.613095 | orchestrator | 2025-09-19 07:00:47 | INFO  | Task 1699e950-d500-428f-9749-6a44bf1ec964 is in state STARTED 2025-09-19 07:00:47.613108 | orchestrator | 2025-09-19 07:00:47 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:00:50.720678 | orchestrator | 2025-09-19 07:00:50 | INFO  | Task e5423dca-74c9-4d43-a5a9-44aa1bcb3f7f is in state STARTED 2025-09-19 07:00:50.723013 | orchestrator | 2025-09-19 07:00:50 | INFO  | Task de82d5e4-37a9-4777-964f-18b14aa5e9af is in state STARTED 
2025-09-19 07:00:50.727439 | orchestrator | 2025-09-19 07:00:50 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED 2025-09-19 07:00:50.727468 | orchestrator | 2025-09-19 07:00:50 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:00:50.732086 | orchestrator | 2025-09-19 07:00:50 | INFO  | Task b41a3762-5555-486f-a669-847f393a3f9b is in state STARTED 2025-09-19 07:00:50.734184 | orchestrator | 2025-09-19 07:00:50 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:00:50.734238 | orchestrator | 2025-09-19 07:00:50 | INFO  | Task 1699e950-d500-428f-9749-6a44bf1ec964 is in state STARTED 2025-09-19 07:00:50.734251 | orchestrator | 2025-09-19 07:00:50 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:00:54.046789 | orchestrator | 2025-09-19 07:00:53 | INFO  | Task e5423dca-74c9-4d43-a5a9-44aa1bcb3f7f is in state STARTED 2025-09-19 07:00:54.046914 | orchestrator | 2025-09-19 07:00:53 | INFO  | Task de82d5e4-37a9-4777-964f-18b14aa5e9af is in state STARTED 2025-09-19 07:00:54.046932 | orchestrator | 2025-09-19 07:00:53 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED 2025-09-19 07:00:54.046944 | orchestrator | 2025-09-19 07:00:53 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:00:54.046956 | orchestrator | 2025-09-19 07:00:53 | INFO  | Task b41a3762-5555-486f-a669-847f393a3f9b is in state STARTED 2025-09-19 07:00:54.046967 | orchestrator | 2025-09-19 07:00:53 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:00:54.046979 | orchestrator | 2025-09-19 07:00:53 | INFO  | Task 1699e950-d500-428f-9749-6a44bf1ec964 is in state STARTED 2025-09-19 07:00:54.046991 | orchestrator | 2025-09-19 07:00:53 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:00:57.149753 | orchestrator | 2025-09-19 07:00:57 | INFO  | Task e5423dca-74c9-4d43-a5a9-44aa1bcb3f7f is in state SUCCESS 
2025-09-19 07:00:57.149849 | orchestrator | 2025-09-19 07:00:57 | INFO  | Task de82d5e4-37a9-4777-964f-18b14aa5e9af is in state STARTED 2025-09-19 07:00:57.149863 | orchestrator | 2025-09-19 07:00:57 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED 2025-09-19 07:00:57.149874 | orchestrator | 2025-09-19 07:00:57 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:00:57.150263 | orchestrator | 2025-09-19 07:00:57 | INFO  | Task b41a3762-5555-486f-a669-847f393a3f9b is in state STARTED 2025-09-19 07:00:57.150288 | orchestrator | 2025-09-19 07:00:57 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:00:57.150305 | orchestrator | 2025-09-19 07:00:57 | INFO  | Task 1699e950-d500-428f-9749-6a44bf1ec964 is in state STARTED 2025-09-19 07:00:57.150322 | orchestrator | 2025-09-19 07:00:57 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:01:00.241233 | orchestrator | 2025-09-19 07:01:00 | INFO  | Task de82d5e4-37a9-4777-964f-18b14aa5e9af is in state STARTED 2025-09-19 07:01:00.251358 | orchestrator | 2025-09-19 07:01:00 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED 2025-09-19 07:01:00.254934 | orchestrator | 2025-09-19 07:01:00 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:01:00.258765 | orchestrator | 2025-09-19 07:01:00 | INFO  | Task b41a3762-5555-486f-a669-847f393a3f9b is in state STARTED 2025-09-19 07:01:00.261316 | orchestrator | 2025-09-19 07:01:00 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:01:00.266058 | orchestrator | 2025-09-19 07:01:00 | INFO  | Task 1699e950-d500-428f-9749-6a44bf1ec964 is in state STARTED 2025-09-19 07:01:00.266070 | orchestrator | 2025-09-19 07:01:00 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:01:03.339650 | orchestrator | 2025-09-19 07:01:03 | INFO  | Task de82d5e4-37a9-4777-964f-18b14aa5e9af is in state STARTED 
2025-09-19 07:01:03.339729 | orchestrator | 2025-09-19 07:01:03 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED 2025-09-19 07:01:03.339739 | orchestrator | 2025-09-19 07:01:03 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:01:03.339748 | orchestrator | 2025-09-19 07:01:03 | INFO  | Task b41a3762-5555-486f-a669-847f393a3f9b is in state STARTED 2025-09-19 07:01:03.339780 | orchestrator | 2025-09-19 07:01:03 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:01:03.339788 | orchestrator | 2025-09-19 07:01:03 | INFO  | Task 1699e950-d500-428f-9749-6a44bf1ec964 is in state STARTED 2025-09-19 07:01:03.339796 | orchestrator | 2025-09-19 07:01:03 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:01:06.408947 | orchestrator | 2025-09-19 07:01:06 | INFO  | Task de82d5e4-37a9-4777-964f-18b14aa5e9af is in state STARTED 2025-09-19 07:01:06.409161 | orchestrator | 2025-09-19 07:01:06 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED 2025-09-19 07:01:06.410211 | orchestrator | 2025-09-19 07:01:06 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:01:06.411127 | orchestrator | 2025-09-19 07:01:06 | INFO  | Task b41a3762-5555-486f-a669-847f393a3f9b is in state STARTED 2025-09-19 07:01:06.414533 | orchestrator | 2025-09-19 07:01:06 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:01:06.415077 | orchestrator | 2025-09-19 07:01:06 | INFO  | Task 1699e950-d500-428f-9749-6a44bf1ec964 is in state SUCCESS 2025-09-19 07:01:06.415221 | orchestrator | 2025-09-19 07:01:06 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:01:09.452051 | orchestrator | 2025-09-19 07:01:09 | INFO  | Task de82d5e4-37a9-4777-964f-18b14aa5e9af is in state STARTED 2025-09-19 07:01:09.453178 | orchestrator | 2025-09-19 07:01:09 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED 
2025-09-19 07:01:09.458203 | orchestrator | 2025-09-19 07:01:09 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:01:09.458269 | orchestrator | 2025-09-19 07:01:09 | INFO  | Task b41a3762-5555-486f-a669-847f393a3f9b is in state STARTED 2025-09-19 07:01:09.459137 | orchestrator | 2025-09-19 07:01:09 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:01:09.459462 | orchestrator | 2025-09-19 07:01:09 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:01:12.517377 | orchestrator | 2025-09-19 07:01:12 | INFO  | Task de82d5e4-37a9-4777-964f-18b14aa5e9af is in state STARTED 2025-09-19 07:01:12.519765 | orchestrator | 2025-09-19 07:01:12 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED 2025-09-19 07:01:12.520987 | orchestrator | 2025-09-19 07:01:12 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:01:12.523302 | orchestrator | 2025-09-19 07:01:12 | INFO  | Task b41a3762-5555-486f-a669-847f393a3f9b is in state STARTED 2025-09-19 07:01:12.523455 | orchestrator | 2025-09-19 07:01:12 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:01:12.523476 | orchestrator | 2025-09-19 07:01:12 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:01:15.557991 | orchestrator | 2025-09-19 07:01:15 | INFO  | Task de82d5e4-37a9-4777-964f-18b14aa5e9af is in state STARTED 2025-09-19 07:01:15.558247 | orchestrator | 2025-09-19 07:01:15 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED 2025-09-19 07:01:15.559046 | orchestrator | 2025-09-19 07:01:15 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:01:15.560207 | orchestrator | 2025-09-19 07:01:15 | INFO  | Task b41a3762-5555-486f-a669-847f393a3f9b is in state STARTED 2025-09-19 07:01:15.562267 | orchestrator | 2025-09-19 07:01:15 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 
2025-09-19 07:01:15.562445 | orchestrator | 2025-09-19 07:01:15 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:01:18.635843 | orchestrator | 2025-09-19 07:01:18 | INFO  | Task de82d5e4-37a9-4777-964f-18b14aa5e9af is in state STARTED 2025-09-19 07:01:18.635955 | orchestrator | 2025-09-19 07:01:18 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED 2025-09-19 07:01:18.635970 | orchestrator | 2025-09-19 07:01:18 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:01:18.637647 | orchestrator | 2025-09-19 07:01:18 | INFO  | Task b41a3762-5555-486f-a669-847f393a3f9b is in state STARTED 2025-09-19 07:01:18.641265 | orchestrator | 2025-09-19 07:01:18 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:01:18.641526 | orchestrator | 2025-09-19 07:01:18 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:01:21.718464 | orchestrator | 2025-09-19 07:01:21 | INFO  | Task de82d5e4-37a9-4777-964f-18b14aa5e9af is in state STARTED 2025-09-19 07:01:21.719290 | orchestrator | 2025-09-19 07:01:21 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED 2025-09-19 07:01:21.719986 | orchestrator | 2025-09-19 07:01:21 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:01:21.720780 | orchestrator | 2025-09-19 07:01:21 | INFO  | Task b41a3762-5555-486f-a669-847f393a3f9b is in state STARTED 2025-09-19 07:01:21.721804 | orchestrator | 2025-09-19 07:01:21 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:01:21.721848 | orchestrator | 2025-09-19 07:01:21 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:01:24.773606 | orchestrator | 2025-09-19 07:01:24 | INFO  | Task de82d5e4-37a9-4777-964f-18b14aa5e9af is in state STARTED 2025-09-19 07:01:24.801770 | orchestrator | 2025-09-19 07:01:24 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED 2025-09-19 07:01:24.801849 | 
orchestrator | 2025-09-19 07:01:24 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:01:24.801860 | orchestrator | 2025-09-19 07:01:24 | INFO  | Task b41a3762-5555-486f-a669-847f393a3f9b is in state STARTED 2025-09-19 07:01:24.801869 | orchestrator | 2025-09-19 07:01:24 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:01:24.801878 | orchestrator | 2025-09-19 07:01:24 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:01:27.855107 | orchestrator | 2025-09-19 07:01:27 | INFO  | Task de82d5e4-37a9-4777-964f-18b14aa5e9af is in state STARTED 2025-09-19 07:01:27.855186 | orchestrator | 2025-09-19 07:01:27 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED 2025-09-19 07:01:27.855201 | orchestrator | 2025-09-19 07:01:27 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:01:27.855213 | orchestrator | 2025-09-19 07:01:27 | INFO  | Task b41a3762-5555-486f-a669-847f393a3f9b is in state STARTED 2025-09-19 07:01:27.855226 | orchestrator | 2025-09-19 07:01:27 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:01:27.855238 | orchestrator | 2025-09-19 07:01:27 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:01:30.874230 | orchestrator | 2025-09-19 07:01:30 | INFO  | Task de82d5e4-37a9-4777-964f-18b14aa5e9af is in state STARTED 2025-09-19 07:01:30.876173 | orchestrator | 2025-09-19 07:01:30 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED 2025-09-19 07:01:30.876612 | orchestrator | 2025-09-19 07:01:30 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:01:30.878589 | orchestrator | 2025-09-19 07:01:30 | INFO  | Task b41a3762-5555-486f-a669-847f393a3f9b is in state STARTED 2025-09-19 07:01:30.879226 | orchestrator | 2025-09-19 07:01:30 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:01:30.879252 | 
orchestrator | 2025-09-19 07:01:30 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:01:33.916869 | orchestrator | 2025-09-19 07:01:33.916930 | orchestrator | 2025-09-19 07:01:33.916940 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-09-19 07:01:33.916948 | orchestrator | 2025-09-19 07:01:33.916955 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-09-19 07:01:33.916963 | orchestrator | Friday 19 September 2025 07:00:16 +0000 (0:00:00.546) 0:00:00.546 ****** 2025-09-19 07:01:33.916970 | orchestrator | ok: [testbed-manager] => { 2025-09-19 07:01:33.916978 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-09-19 07:01:33.916986 | orchestrator | } 2025-09-19 07:01:33.916993 | orchestrator | 2025-09-19 07:01:33.917000 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-09-19 07:01:33.917007 | orchestrator | Friday 19 September 2025 07:00:17 +0000 (0:00:00.511) 0:00:01.058 ****** 2025-09-19 07:01:33.917014 | orchestrator | ok: [testbed-manager] 2025-09-19 07:01:33.917021 | orchestrator | 2025-09-19 07:01:33.917029 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-09-19 07:01:33.917036 | orchestrator | Friday 19 September 2025 07:00:18 +0000 (0:00:01.179) 0:00:02.238 ****** 2025-09-19 07:01:33.917044 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-09-19 07:01:33.917051 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-09-19 07:01:33.917074 | orchestrator | 2025-09-19 07:01:33.917081 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-09-19 07:01:33.917089 | orchestrator | Friday 19 September 2025 07:00:20 +0000 (0:00:02.055) 0:00:04.293 ****** 
2025-09-19 07:01:33.917096 | orchestrator | changed: [testbed-manager] 2025-09-19 07:01:33.917104 | orchestrator | 2025-09-19 07:01:33.917111 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-09-19 07:01:33.917119 | orchestrator | Friday 19 September 2025 07:00:23 +0000 (0:00:02.417) 0:00:06.711 ****** 2025-09-19 07:01:33.917126 | orchestrator | changed: [testbed-manager] 2025-09-19 07:01:33.917134 | orchestrator | 2025-09-19 07:01:33.917141 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-09-19 07:01:33.917148 | orchestrator | Friday 19 September 2025 07:00:25 +0000 (0:00:02.129) 0:00:08.840 ****** 2025-09-19 07:01:33.917156 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2025-09-19 07:01:33.917163 | orchestrator | ok: [testbed-manager] 2025-09-19 07:01:33.917171 | orchestrator | 2025-09-19 07:01:33.917178 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-09-19 07:01:33.917185 | orchestrator | Friday 19 September 2025 07:00:50 +0000 (0:00:25.536) 0:00:34.377 ****** 2025-09-19 07:01:33.917193 | orchestrator | changed: [testbed-manager] 2025-09-19 07:01:33.917201 | orchestrator | 2025-09-19 07:01:33.917208 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:01:33.917226 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:01:33.917234 | orchestrator | 2025-09-19 07:01:33.917242 | orchestrator | 2025-09-19 07:01:33.917249 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:01:33.917257 | orchestrator | Friday 19 September 2025 07:00:54 +0000 (0:00:03.427) 0:00:37.804 ****** 2025-09-19 07:01:33.917264 | orchestrator | 
=============================================================================== 2025-09-19 07:01:33.917271 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.54s 2025-09-19 07:01:33.917279 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.43s 2025-09-19 07:01:33.917302 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.42s 2025-09-19 07:01:33.917310 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.13s 2025-09-19 07:01:33.917318 | orchestrator | osism.services.homer : Create required directories ---------------------- 2.06s 2025-09-19 07:01:33.917325 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.18s 2025-09-19 07:01:33.917332 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.51s 2025-09-19 07:01:33.917340 | orchestrator | 2025-09-19 07:01:33.917347 | orchestrator | 2025-09-19 07:01:33.917354 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-09-19 07:01:33.917362 | orchestrator | 2025-09-19 07:01:33.917369 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-09-19 07:01:33.917963 | orchestrator | Friday 19 September 2025 07:00:17 +0000 (0:00:00.415) 0:00:00.415 ****** 2025-09-19 07:01:33.917985 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-09-19 07:01:33.917994 | orchestrator | 2025-09-19 07:01:33.918001 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-09-19 07:01:33.918116 | orchestrator | Friday 19 September 2025 07:00:18 +0000 (0:00:00.485) 0:00:00.901 ****** 2025-09-19 07:01:33.918124 | orchestrator | changed: 
[testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-09-19 07:01:33.918132 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-09-19 07:01:33.918139 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-09-19 07:01:33.918147 | orchestrator | 2025-09-19 07:01:33.918154 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-09-19 07:01:33.918162 | orchestrator | Friday 19 September 2025 07:00:20 +0000 (0:00:02.653) 0:00:03.555 ****** 2025-09-19 07:01:33.918169 | orchestrator | changed: [testbed-manager] 2025-09-19 07:01:33.918177 | orchestrator | 2025-09-19 07:01:33.918185 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-09-19 07:01:33.918192 | orchestrator | Friday 19 September 2025 07:00:22 +0000 (0:00:01.873) 0:00:05.428 ****** 2025-09-19 07:01:33.918212 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 
2025-09-19 07:01:33.918220 | orchestrator | ok: [testbed-manager] 2025-09-19 07:01:33.918228 | orchestrator | 2025-09-19 07:01:33.918235 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-09-19 07:01:33.918242 | orchestrator | Friday 19 September 2025 07:00:55 +0000 (0:00:32.315) 0:00:37.743 ****** 2025-09-19 07:01:33.918250 | orchestrator | changed: [testbed-manager] 2025-09-19 07:01:33.918257 | orchestrator | 2025-09-19 07:01:33.918265 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-09-19 07:01:33.918272 | orchestrator | Friday 19 September 2025 07:00:56 +0000 (0:00:01.680) 0:00:39.424 ****** 2025-09-19 07:01:33.918280 | orchestrator | ok: [testbed-manager] 2025-09-19 07:01:33.918287 | orchestrator | 2025-09-19 07:01:33.918294 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-09-19 07:01:33.918302 | orchestrator | Friday 19 September 2025 07:00:57 +0000 (0:00:00.564) 0:00:39.989 ****** 2025-09-19 07:01:33.918309 | orchestrator | changed: [testbed-manager] 2025-09-19 07:01:33.918316 | orchestrator | 2025-09-19 07:01:33.918324 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-09-19 07:01:33.918331 | orchestrator | Friday 19 September 2025 07:01:00 +0000 (0:00:03.418) 0:00:43.408 ****** 2025-09-19 07:01:33.918338 | orchestrator | changed: [testbed-manager] 2025-09-19 07:01:33.918346 | orchestrator | 2025-09-19 07:01:33.918353 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-09-19 07:01:33.918360 | orchestrator | Friday 19 September 2025 07:01:02 +0000 (0:00:01.967) 0:00:45.375 ****** 2025-09-19 07:01:33.918368 | orchestrator | changed: [testbed-manager] 2025-09-19 07:01:33.918375 | orchestrator | 2025-09-19 07:01:33.918407 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : 
Copy bash completion script] *** 2025-09-19 07:01:33.918415 | orchestrator | Friday 19 September 2025 07:01:04 +0000 (0:00:01.953) 0:00:47.329 ****** 2025-09-19 07:01:33.918423 | orchestrator | ok: [testbed-manager] 2025-09-19 07:01:33.918430 | orchestrator | 2025-09-19 07:01:33.918438 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:01:33.918445 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:01:33.918453 | orchestrator | 2025-09-19 07:01:33.918460 | orchestrator | 2025-09-19 07:01:33.918468 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:01:33.918475 | orchestrator | Friday 19 September 2025 07:01:05 +0000 (0:00:00.474) 0:00:47.803 ****** 2025-09-19 07:01:33.918482 | orchestrator | =============================================================================== 2025-09-19 07:01:33.918490 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 32.32s 2025-09-19 07:01:33.918502 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.42s 2025-09-19 07:01:33.918509 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.65s 2025-09-19 07:01:33.918517 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.97s 2025-09-19 07:01:33.918524 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.95s 2025-09-19 07:01:33.918531 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.87s 2025-09-19 07:01:33.918539 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.68s 2025-09-19 07:01:33.918546 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.56s 2025-09-19 07:01:33.918554 | 
orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.49s 2025-09-19 07:01:33.918561 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.47s 2025-09-19 07:01:33.918569 | orchestrator | 2025-09-19 07:01:33.918576 | orchestrator | 2025-09-19 07:01:33.918583 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-09-19 07:01:33.918591 | orchestrator | 2025-09-19 07:01:33.918598 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-09-19 07:01:33.918606 | orchestrator | Friday 19 September 2025 07:00:35 +0000 (0:00:00.240) 0:00:00.240 ****** 2025-09-19 07:01:33.918613 | orchestrator | ok: [testbed-manager] 2025-09-19 07:01:33.918620 | orchestrator | 2025-09-19 07:01:33.918628 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-09-19 07:01:33.918635 | orchestrator | Friday 19 September 2025 07:00:36 +0000 (0:00:00.779) 0:00:01.019 ****** 2025-09-19 07:01:33.918643 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-09-19 07:01:33.918650 | orchestrator | 2025-09-19 07:01:33.918657 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-09-19 07:01:33.918665 | orchestrator | Friday 19 September 2025 07:00:37 +0000 (0:00:00.532) 0:00:01.552 ****** 2025-09-19 07:01:33.918672 | orchestrator | changed: [testbed-manager] 2025-09-19 07:01:33.918680 | orchestrator | 2025-09-19 07:01:33.918687 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-09-19 07:01:33.918694 | orchestrator | Friday 19 September 2025 07:00:38 +0000 (0:00:01.006) 0:00:02.558 ****** 2025-09-19 07:01:33.918702 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2025-09-19 07:01:33.918709 | orchestrator | ok: [testbed-manager]
2025-09-19 07:01:33.918716 | orchestrator |
2025-09-19 07:01:33.918724 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-09-19 07:01:33.918731 | orchestrator | Friday 19 September 2025 07:01:26 +0000 (0:00:48.607) 0:00:51.165 ******
2025-09-19 07:01:33.918739 | orchestrator | changed: [testbed-manager]
2025-09-19 07:01:33.918746 | orchestrator |
2025-09-19 07:01:33.918755 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:01:33.918768 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:01:33.918777 | orchestrator |
2025-09-19 07:01:33.918786 | orchestrator |
2025-09-19 07:01:33.918794 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:01:33.918808 | orchestrator | Friday 19 September 2025 07:01:31 +0000 (0:00:04.798) 0:00:55.964 ******
2025-09-19 07:01:33.918817 | orchestrator | ===============================================================================
2025-09-19 07:01:33.918826 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 48.61s
2025-09-19 07:01:33.918834 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.80s
2025-09-19 07:01:33.918842 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.01s
2025-09-19 07:01:33.918851 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.78s
2025-09-19 07:01:33.918859 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.53s
2025-09-19 07:01:33.918868 | orchestrator | 2025-09-19 07:01:33 | INFO  | Task de82d5e4-37a9-4777-964f-18b14aa5e9af is in state SUCCESS
2025-09-19 07:01:33.918877 | orchestrator | 2025-09-19 07:01:33 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED
2025-09-19 07:01:33.918886 | orchestrator | 2025-09-19 07:01:33 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:01:33.918894 | orchestrator |
2025-09-19 07:01:33.918902 | orchestrator |
2025-09-19 07:01:33.918911 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 07:01:33.918919 | orchestrator |
2025-09-19 07:01:33.918928 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 07:01:33.918937 | orchestrator | Friday 19 September 2025 07:00:18 +0000 (0:00:00.590) 0:00:00.590 ******
2025-09-19 07:01:33.918945 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-09-19 07:01:33.918954 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-09-19 07:01:33.918963 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-09-19 07:01:33.918971 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-09-19 07:01:33.918980 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-09-19 07:01:33.918989 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-09-19 07:01:33.918997 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-09-19 07:01:33.919005 | orchestrator |
2025-09-19 07:01:33.919012 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-09-19 07:01:33.919020 | orchestrator |
2025-09-19 07:01:33.919027 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-09-19 07:01:33.919034 | orchestrator | Friday 19 September 2025 07:00:20 +0000 (0:00:02.437) 0:00:03.027 ******
2025-09-19 07:01:33.919053 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:01:33.919066 | orchestrator |
2025-09-19 07:01:33.919073 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-09-19 07:01:33.919081 | orchestrator | Friday 19 September 2025 07:00:21 +0000 (0:00:01.331) 0:00:04.359 ******
2025-09-19 07:01:33.919088 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:01:33.919096 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:01:33.919103 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:01:33.919110 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:01:33.919118 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:01:33.919125 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:01:33.919132 | orchestrator | ok: [testbed-manager]
2025-09-19 07:01:33.919140 | orchestrator |
2025-09-19 07:01:33.919147 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-09-19 07:01:33.919158 | orchestrator | Friday 19 September 2025 07:00:24 +0000 (0:00:02.253) 0:00:06.613 ******
2025-09-19 07:01:33.919166 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:01:33.919173 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:01:33.919180 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:01:33.919188 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:01:33.919195 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:01:33.919202 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:01:33.919210 | orchestrator | ok: [testbed-manager]
2025-09-19 07:01:33.919217 | orchestrator |
2025-09-19 07:01:33.919224 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-09-19 07:01:33.919232 | orchestrator | Friday 19 September 2025 07:00:28 +0000 (0:00:04.062) 0:00:10.676 ******
2025-09-19 07:01:33.919239 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:01:33.919247 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:01:33.919254 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:01:33.919261 | orchestrator | changed: [testbed-manager]
2025-09-19 07:01:33.919269 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:01:33.919276 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:01:33.919283 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:01:33.919291 | orchestrator |
2025-09-19 07:01:33.919298 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-09-19 07:01:33.919306 | orchestrator | Friday 19 September 2025 07:00:30 +0000 (0:00:02.311) 0:00:12.988 ******
2025-09-19 07:01:33.919313 | orchestrator | changed: [testbed-manager]
2025-09-19 07:01:33.919320 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:01:33.919328 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:01:33.919335 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:01:33.919342 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:01:33.919349 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:01:33.919357 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:01:33.919364 | orchestrator |
2025-09-19 07:01:33.919372 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-09-19 07:01:33.919379 | orchestrator | Friday 19 September 2025 07:00:40 +0000 (0:00:09.734) 0:00:22.722 ******
2025-09-19 07:01:33.919386 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:01:33.919413 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:01:33.919420 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:01:33.919428 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:01:33.919435 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:01:33.919447 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:01:33.919455 | orchestrator | changed: [testbed-manager]
2025-09-19 07:01:33.919462 | orchestrator |
2025-09-19 07:01:33.919470 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-09-19 07:01:33.919477 | orchestrator | Friday 19 September 2025 07:01:07 +0000 (0:00:27.414) 0:00:50.137 ******
2025-09-19 07:01:33.919485 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:01:33.919494 | orchestrator |
2025-09-19 07:01:33.919501 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-09-19 07:01:33.919508 | orchestrator | Friday 19 September 2025 07:01:09 +0000 (0:00:02.068) 0:00:52.206 ******
2025-09-19 07:01:33.919516 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-09-19 07:01:33.919523 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-09-19 07:01:33.919531 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-09-19 07:01:33.919538 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-09-19 07:01:33.919546 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-09-19 07:01:33.919553 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-09-19 07:01:33.919561 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-09-19 07:01:33.919572 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-09-19 07:01:33.919580 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-09-19 07:01:33.919587 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-09-19 07:01:33.919594 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-09-19 07:01:33.919602 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-09-19 07:01:33.919609 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-09-19 07:01:33.919616 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-09-19 07:01:33.919624 | orchestrator |
2025-09-19 07:01:33.919631 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-09-19 07:01:33.919639 | orchestrator | Friday 19 September 2025 07:01:15 +0000 (0:00:06.184) 0:00:58.390 ******
2025-09-19 07:01:33.919647 | orchestrator | ok: [testbed-manager]
2025-09-19 07:01:33.919654 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:01:33.919661 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:01:33.919669 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:01:33.919676 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:01:33.919684 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:01:33.919691 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:01:33.919699 | orchestrator |
2025-09-19 07:01:33.919706 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-09-19 07:01:33.919714 | orchestrator | Friday 19 September 2025 07:01:16 +0000 (0:00:01.169) 0:00:59.560 ******
2025-09-19 07:01:33.919721 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:01:33.919729 | orchestrator | changed: [testbed-manager]
2025-09-19 07:01:33.919736 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:01:33.919744 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:01:33.919751 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:01:33.919759 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:01:33.919766 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:01:33.919773 | orchestrator |
2025-09-19 07:01:33.919781 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-09-19 07:01:33.919789 | orchestrator | Friday 19 September 2025 07:01:18 +0000 (0:00:01.959) 0:01:01.520 ******
2025-09-19 07:01:33.919796 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:01:33.919803 | orchestrator | ok: [testbed-manager]
2025-09-19 07:01:33.919811 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:01:33.919818 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:01:33.919826 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:01:33.919833 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:01:33.919841 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:01:33.919848 | orchestrator |
2025-09-19 07:01:33.919856 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-09-19 07:01:33.919863 | orchestrator | Friday 19 September 2025 07:01:20 +0000 (0:00:01.489) 0:01:03.009 ******
2025-09-19 07:01:33.919871 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:01:33.919878 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:01:33.919885 | orchestrator | ok: [testbed-manager]
2025-09-19 07:01:33.919893 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:01:33.919900 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:01:33.919908 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:01:33.919915 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:01:33.919922 | orchestrator |
2025-09-19 07:01:33.919930 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-09-19 07:01:33.919937 | orchestrator | Friday 19 September 2025 07:01:23 +0000 (0:00:02.938) 0:01:05.948 ******
2025-09-19 07:01:33.919945 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-09-19 07:01:33.919973 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:01:33.919986 | orchestrator |
2025-09-19 07:01:33.919994 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-09-19 07:01:33.920001 | orchestrator | Friday 19 September 2025 07:01:24 +0000 (0:00:01.329) 0:01:07.278 ******
2025-09-19 07:01:33.920009 | orchestrator | changed: [testbed-manager]
2025-09-19 07:01:33.920016 | orchestrator |
2025-09-19 07:01:33.920024 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-09-19 07:01:33.920031 | orchestrator | Friday 19 September 2025 07:01:26 +0000 (0:00:02.221) 0:01:09.499 ******
2025-09-19 07:01:33.920038 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:01:33.920046 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:01:33.920053 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:01:33.920060 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:01:33.920073 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:01:33.920080 | orchestrator | changed: [testbed-manager]
2025-09-19 07:01:33.920088 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:01:33.920095 | orchestrator |
2025-09-19 07:01:33.920103 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:01:33.920110 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:01:33.920118 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:01:33.920126 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:01:33.920133 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:01:33.920141 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:01:33.920148 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:01:33.920155 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:01:33.920163 | orchestrator |
2025-09-19 07:01:33.920170 | orchestrator |
2025-09-19 07:01:33.920178 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:01:33.920185 | orchestrator | Friday 19 September 2025 07:01:30 +0000 (0:00:03.669) 0:01:13.169 ******
2025-09-19 07:01:33.920193 | orchestrator | ===============================================================================
2025-09-19 07:01:33.920205 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 27.41s
2025-09-19 07:01:33.920218 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.73s
2025-09-19 07:01:33.920231 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.18s
2025-09-19 07:01:33.920243 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.06s
2025-09-19 07:01:33.920255 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.67s
2025-09-19 07:01:33.920272 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.94s
2025-09-19 07:01:33.920286 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.44s
2025-09-19 07:01:33.920294 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.31s
2025-09-19 07:01:33.920301 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.25s
2025-09-19 07:01:33.920309 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.22s
2025-09-19 07:01:33.920316 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.07s
2025-09-19 07:01:33.920324 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.96s
2025-09-19 07:01:33.920336 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.49s
2025-09-19 07:01:33.920344 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.33s
2025-09-19 07:01:33.920351 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.33s
2025-09-19 07:01:33.920359 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.17s
2025-09-19 07:01:33.920366 | orchestrator | 2025-09-19 07:01:33 | INFO  | Task b41a3762-5555-486f-a669-847f393a3f9b is in state SUCCESS
2025-09-19 07:01:33.920373 | orchestrator | 2025-09-19 07:01:33 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:01:33.920381 | orchestrator | 2025-09-19 07:01:33 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:01:36.947992 | orchestrator | 2025-09-19 07:01:36 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED
2025-09-19 07:01:36.949353 | orchestrator | 2025-09-19 07:01:36 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:01:36.952121 | orchestrator | 2025-09-19 07:01:36 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:01:36.952567 | orchestrator | 2025-09-19 07:01:36 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:01:39.994504 | orchestrator | 2025-09-19 07:01:39 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED
2025-09-19 07:01:39.994923 | orchestrator | 2025-09-19 07:01:39 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:01:39.996215 | orchestrator | 2025-09-19 07:01:39 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:01:39.996347 | orchestrator | 2025-09-19 07:01:39 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:01:43.029088 | orchestrator | 2025-09-19 07:01:43 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED
2025-09-19 07:01:43.029597 | orchestrator | 2025-09-19 07:01:43 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:01:43.030463 | orchestrator | 2025-09-19 07:01:43 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:01:43.030497 | orchestrator | 2025-09-19 07:01:43 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:01:46.075137 | orchestrator | 2025-09-19 07:01:46 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED
2025-09-19 07:01:46.077262 | orchestrator | 2025-09-19 07:01:46 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:01:46.079984 | orchestrator | 2025-09-19 07:01:46 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:01:46.080037 | orchestrator | 2025-09-19 07:01:46 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:01:49.281797 | orchestrator | 2025-09-19 07:01:49 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED
2025-09-19 07:01:49.281891 | orchestrator | 2025-09-19 07:01:49 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:01:49.281915 | orchestrator | 2025-09-19 07:01:49 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:01:49.281936 | orchestrator | 2025-09-19 07:01:49 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:01:52.310593 | orchestrator | 2025-09-19 07:01:52 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED
2025-09-19 07:01:52.311895 | orchestrator | 2025-09-19 07:01:52 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:01:52.314076 | orchestrator | 2025-09-19 07:01:52 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:01:52.314338 | orchestrator | 2025-09-19 07:01:52 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:01:55.353138 | orchestrator | 2025-09-19 07:01:55 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED
2025-09-19 07:01:55.354543 | orchestrator | 2025-09-19 07:01:55 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:01:55.356001 | orchestrator | 2025-09-19 07:01:55 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:01:55.356012 | orchestrator | 2025-09-19 07:01:55 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:01:58.391705 | orchestrator | 2025-09-19 07:01:58 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED
2025-09-19 07:01:58.393313 | orchestrator | 2025-09-19 07:01:58 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:01:58.395350 | orchestrator | 2025-09-19 07:01:58 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:01:58.395650 | orchestrator | 2025-09-19 07:01:58 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:02:01.428269 | orchestrator | 2025-09-19 07:02:01 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED
2025-09-19 07:02:01.428736 | orchestrator | 2025-09-19 07:02:01 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:02:01.429824 | orchestrator | 2025-09-19 07:02:01 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:02:01.429858 | orchestrator | 2025-09-19 07:02:01 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:02:04.463011 | orchestrator | 2025-09-19 07:02:04 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED
2025-09-19 07:02:04.464116 | orchestrator | 2025-09-19 07:02:04 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:02:04.464983 | orchestrator | 2025-09-19 07:02:04 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:02:04.465371 | orchestrator | 2025-09-19 07:02:04 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:02:07.497879 | orchestrator | 2025-09-19 07:02:07 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED
2025-09-19 07:02:07.498995 | orchestrator | 2025-09-19 07:02:07 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:02:07.499789 | orchestrator | 2025-09-19 07:02:07 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:02:07.499873 | orchestrator | 2025-09-19 07:02:07 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:02:10.531651 | orchestrator | 2025-09-19 07:02:10 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED
2025-09-19 07:02:10.532455 | orchestrator | 2025-09-19 07:02:10 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:02:10.533240 | orchestrator | 2025-09-19 07:02:10 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:02:10.533469 | orchestrator | 2025-09-19 07:02:10 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:02:13.576228 | orchestrator | 2025-09-19 07:02:13 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED
2025-09-19 07:02:13.577671 | orchestrator | 2025-09-19 07:02:13 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:02:13.578710 | orchestrator | 2025-09-19 07:02:13 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:02:13.578850 | orchestrator | 2025-09-19 07:02:13 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:02:16.614743 | orchestrator | 2025-09-19 07:02:16 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED
2025-09-19 07:02:16.615877 | orchestrator | 2025-09-19 07:02:16 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:02:16.617770 | orchestrator | 2025-09-19 07:02:16 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:02:16.617804 | orchestrator | 2025-09-19 07:02:16 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:02:19.653516 | orchestrator | 2025-09-19 07:02:19 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED
2025-09-19 07:02:19.655171 | orchestrator | 2025-09-19 07:02:19 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:02:19.657166 | orchestrator | 2025-09-19 07:02:19 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:02:19.657197 | orchestrator | 2025-09-19 07:02:19 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:02:22.697826 | orchestrator | 2025-09-19 07:02:22 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED
2025-09-19 07:02:22.701071 | orchestrator | 2025-09-19 07:02:22 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:02:22.702694 | orchestrator | 2025-09-19 07:02:22 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:02:22.703538 | orchestrator | 2025-09-19 07:02:22 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:02:25.745416 | orchestrator | 2025-09-19 07:02:25 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED
2025-09-19 07:02:25.745534 | orchestrator | 2025-09-19 07:02:25 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:02:25.746293 | orchestrator | 2025-09-19 07:02:25 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:02:25.746419 | orchestrator | 2025-09-19 07:02:25 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:02:28.795287 | orchestrator | 2025-09-19 07:02:28 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED
2025-09-19 07:02:28.796440 | orchestrator | 2025-09-19 07:02:28 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:02:28.799699 | orchestrator | 2025-09-19 07:02:28 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:02:28.799742 | orchestrator | 2025-09-19 07:02:28 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:02:31.845298 | orchestrator | 2025-09-19 07:02:31 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state STARTED
2025-09-19 07:02:31.845472 | orchestrator | 2025-09-19 07:02:31 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:02:31.845979 | orchestrator | 2025-09-19 07:02:31 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:02:31.846149 | orchestrator | 2025-09-19 07:02:31 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:02:34.883760 | orchestrator |
2025-09-19 07:02:34.883831 | orchestrator | 2025-09-19 07:02:34 | INFO  | Task db403354-e710-4956-969c-ff1607a3d5da is in state SUCCESS
2025-09-19 07:02:34.885424 | orchestrator |
2025-09-19 07:02:34.885455 | orchestrator | PLAY [Apply role common] *******************************************************
2025-09-19 07:02:34.885460 | orchestrator |
2025-09-19 07:02:34.885465 | orchestrator | TASK [common : include_tasks] **************************************************
2025-09-19 07:02:34.885469 | orchestrator | Friday 19 September 2025 07:00:10 +0000 (0:00:00.253) 0:00:00.253 ******
2025-09-19 07:02:34.885487 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:02:34.885492 | orchestrator |
2025-09-19 07:02:34.885496 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-09-19 07:02:34.885500 | orchestrator | Friday 19 September 2025 07:00:11 +0000 (0:00:01.105) 0:00:01.358 ******
2025-09-19 07:02:34.885504 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 07:02:34.885508 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 07:02:34.885521 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 07:02:34.885527 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 07:02:34.885533 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 07:02:34.885540 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 07:02:34.885545 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 07:02:34.885554 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 07:02:34.885560 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 07:02:34.885566 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 07:02:34.885601 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 07:02:34.885610 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 07:02:34.885616 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 07:02:34.885623 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 07:02:34.885630 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 07:02:34.885637 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 07:02:34.885642 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 07:02:34.885646 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 07:02:34.885650 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 07:02:34.885659 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 07:02:34.885663 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 07:02:34.885667 | orchestrator |
2025-09-19 07:02:34.885671 | orchestrator | TASK [common : include_tasks] **************************************************
2025-09-19 07:02:34.885675 | orchestrator | Friday 19 September 2025 07:00:15 +0000 (0:00:03.809) 0:00:05.168 ******
2025-09-19 07:02:34.885679 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:02:34.885684 | orchestrator |
2025-09-19 07:02:34.885688 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-09-19 07:02:34.885692 | orchestrator | Friday 19 September 2025 07:00:16 +0000 (0:00:01.227) 0:00:06.396 ******
2025-09-19 07:02:34.885713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 07:02:34.885728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 07:02:34.885741 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 07:02:34.885746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 07:02:34.885750 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 07:02:34.885754 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 07:02:34.885761 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 07:02:34.885766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:02:34.885774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:02:34.885782 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:02:34.885786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.885790 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.885797 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.885801 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.885809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.885822 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.885830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.885834 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.885838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.885842 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.885846 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.885850 | orchestrator | 2025-09-19 07:02:34.885854 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-09-19 07:02:34.885858 | orchestrator | Friday 19 September 2025 07:00:21 +0000 (0:00:05.186) 0:00:11.582 ****** 2025-09-19 07:02:34.885871 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:02:34.885875 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.885887 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.885893 | orchestrator | skipping: [testbed-manager] 2025-09-19 07:02:34.885905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:02:34.885912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.885916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.885920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:02:34.885927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.885931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.885939 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:02:34.885943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:02:34.885948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.885955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.885960 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:02:34.885964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.885968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.885972 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:02:34.885977 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:02:34.885981 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:02:34.885988 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:02:34.885996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 
07:02:34.886001 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.886005 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:02:34.886063 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:02:34.886070 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.886075 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.886080 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:02:34.886084 | orchestrator | 2025-09-19 07:02:34.886089 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-09-19 07:02:34.886093 | orchestrator | Friday 19 September 2025 07:00:23 +0000 (0:00:01.716) 0:00:13.298 ****** 2025-09-19 07:02:34.886098 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:02:34.886108 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.886113 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.886118 | orchestrator | skipping: [testbed-manager] 2025-09-19 07:02:34.886125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:02:34.886136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.886177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.886184 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:02:34.886189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:02:34.886194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.886203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.886208 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:02:34.886215 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:02:34.886220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:02:34.886225 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.887515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.887605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.887621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.887654 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:02:34.887667 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:02:34.887680 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:02:34.887700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.887712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.887724 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:02:34.887736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:02:34.887764 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.887777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.887788 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:02:34.887800 | orchestrator | 2025-09-19 07:02:34.887811 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-09-19 07:02:34.887824 | orchestrator | Friday 19 September 2025 07:00:26 +0000 (0:00:03.363) 0:00:16.661 ****** 2025-09-19 07:02:34.887835 | orchestrator | skipping: [testbed-manager] 2025-09-19 07:02:34.887846 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:02:34.887869 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:02:34.887880 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:02:34.887891 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:02:34.887902 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:02:34.887913 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:02:34.887924 | orchestrator | 2025-09-19 07:02:34.887936 | orchestrator | TASK 
[common : Restart systemd-tmpfiles] *************************************** 2025-09-19 07:02:34.887947 | orchestrator | Friday 19 September 2025 07:00:27 +0000 (0:00:00.832) 0:00:17.494 ****** 2025-09-19 07:02:34.887958 | orchestrator | skipping: [testbed-manager] 2025-09-19 07:02:34.887969 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:02:34.887980 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:02:34.887991 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:02:34.888002 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:02:34.888015 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:02:34.888029 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:02:34.888041 | orchestrator | 2025-09-19 07:02:34.888055 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-09-19 07:02:34.888068 | orchestrator | Friday 19 September 2025 07:00:28 +0000 (0:00:01.161) 0:00:18.655 ****** 2025-09-19 07:02:34.888082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:02:34.888100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:02:34.888114 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:02:34.888128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:02:34.888155 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:02:34.888175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.888189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.888203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.888217 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:02:34.888231 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:02:34.888244 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.888263 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.888289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.888303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.888318 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.888336 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.888350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.888390 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.888403 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.888422 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.888441 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.888452 | orchestrator | 2025-09-19 07:02:34.888464 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-09-19 07:02:34.888475 | orchestrator | Friday 19 September 2025 07:00:35 +0000 (0:00:06.472) 0:00:25.127 ****** 2025-09-19 07:02:34.888487 | orchestrator | [WARNING]: Skipped 2025-09-19 07:02:34.888499 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-09-19 07:02:34.888510 | orchestrator | to this access issue: 2025-09-19 07:02:34.888522 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-09-19 07:02:34.888533 | orchestrator | directory 2025-09-19 07:02:34.888544 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 07:02:34.888555 | orchestrator | 2025-09-19 07:02:34.888566 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-09-19 07:02:34.888578 | orchestrator | Friday 19 September 2025 07:00:36 +0000 (0:00:01.404) 0:00:26.532 ****** 2025-09-19 07:02:34.888589 | 
orchestrator | [WARNING]: Skipped 2025-09-19 07:02:34.888600 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-09-19 07:02:34.888611 | orchestrator | to this access issue: 2025-09-19 07:02:34.888622 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-09-19 07:02:34.888633 | orchestrator | directory 2025-09-19 07:02:34.888644 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 07:02:34.888655 | orchestrator | 2025-09-19 07:02:34.888666 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-09-19 07:02:34.888678 | orchestrator | Friday 19 September 2025 07:00:37 +0000 (0:00:00.873) 0:00:27.406 ****** 2025-09-19 07:02:34.888689 | orchestrator | [WARNING]: Skipped 2025-09-19 07:02:34.888700 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-09-19 07:02:34.888711 | orchestrator | to this access issue: 2025-09-19 07:02:34.888722 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-09-19 07:02:34.888733 | orchestrator | directory 2025-09-19 07:02:34.888752 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 07:02:34.888764 | orchestrator | 2025-09-19 07:02:34.888775 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-09-19 07:02:34.888786 | orchestrator | Friday 19 September 2025 07:00:38 +0000 (0:00:00.803) 0:00:28.210 ****** 2025-09-19 07:02:34.888797 | orchestrator | [WARNING]: Skipped 2025-09-19 07:02:34.888808 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-09-19 07:02:34.888819 | orchestrator | to this access issue: 2025-09-19 07:02:34.888835 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-09-19 07:02:34.888846 | orchestrator | directory 2025-09-19 
07:02:34.888857 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 07:02:34.888869 | orchestrator | 2025-09-19 07:02:34.888880 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-09-19 07:02:34.888891 | orchestrator | Friday 19 September 2025 07:00:39 +0000 (0:00:00.893) 0:00:29.104 ****** 2025-09-19 07:02:34.888902 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:02:34.888919 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:02:34.888930 | orchestrator | changed: [testbed-manager] 2025-09-19 07:02:34.888941 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:02:34.888952 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:02:34.888963 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:02:34.888974 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:02:34.888985 | orchestrator | 2025-09-19 07:02:34.888996 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-09-19 07:02:34.889007 | orchestrator | Friday 19 September 2025 07:00:42 +0000 (0:00:03.641) 0:00:32.745 ****** 2025-09-19 07:02:34.889019 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 07:02:34.889030 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 07:02:34.889041 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 07:02:34.889052 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 07:02:34.889063 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 07:02:34.889074 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 
07:02:34.889085 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 07:02:34.889097 | orchestrator | 2025-09-19 07:02:34.889108 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-09-19 07:02:34.889119 | orchestrator | Friday 19 September 2025 07:00:45 +0000 (0:00:02.599) 0:00:35.345 ****** 2025-09-19 07:02:34.889130 | orchestrator | changed: [testbed-manager] 2025-09-19 07:02:34.889142 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:02:34.889153 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:02:34.889164 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:02:34.889181 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:02:34.889193 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:02:34.889204 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:02:34.889215 | orchestrator | 2025-09-19 07:02:34.889226 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-09-19 07:02:34.889237 | orchestrator | Friday 19 September 2025 07:00:48 +0000 (0:00:03.283) 0:00:38.629 ****** 2025-09-19 07:02:34.889249 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:02:34.889261 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.889273 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:02:34.889294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.889308 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.889332 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:02:34.889350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.889418 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.889433 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:02:34.889445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.889464 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.889481 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-19 07:02:34.889494 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:02:34.889506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.889524 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:02:34.889537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.889548 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:02:34.889566 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:02:34.889582 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.889594 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.889606 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.889617 | orchestrator | 2025-09-19 07:02:34.889629 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-09-19 07:02:34.889640 | orchestrator | Friday 19 September 2025 07:00:51 +0000 (0:00:02.964) 0:00:41.594 ****** 2025-09-19 07:02:34.889651 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 07:02:34.889662 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 07:02:34.889673 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 07:02:34.889684 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 07:02:34.889695 | orchestrator | changed: [testbed-node-3] => 
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 07:02:34.889707 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 07:02:34.889718 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 07:02:34.889729 | orchestrator | 2025-09-19 07:02:34.889746 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-09-19 07:02:34.889758 | orchestrator | Friday 19 September 2025 07:00:55 +0000 (0:00:03.632) 0:00:45.226 ****** 2025-09-19 07:02:34.889769 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 07:02:34.889781 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 07:02:34.889792 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 07:02:34.889803 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 07:02:34.889814 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 07:02:34.889825 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 07:02:34.889845 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 07:02:34.889857 | orchestrator | 2025-09-19 07:02:34.889868 | orchestrator | TASK [common : Check common containers] **************************************** 2025-09-19 07:02:34.889879 | orchestrator | Friday 19 September 2025 07:00:57 +0000 (0:00:02.706) 0:00:47.932 ****** 2025-09-19 07:02:34.889892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:02:34.889912 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:02:34.889939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:02:34.889960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:02:34.889979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.890008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.890093 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 
07:02:34.890136 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:02:34.890161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.890188 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:02:34.890210 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.890230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.890264 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.890284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.890304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.890316 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.890328 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.890344 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.890357 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.890391 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.890403 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:02:34.890414 | orchestrator | 2025-09-19 07:02:34.890425 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-09-19 07:02:34.890444 | orchestrator | Friday 19 September 2025 07:01:01 
+0000 (0:00:03.622) 0:00:51.555 ****** 2025-09-19 07:02:34.890462 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:02:34.890474 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:02:34.890485 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:02:34.890496 | orchestrator | changed: [testbed-manager] 2025-09-19 07:02:34.890507 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:02:34.890519 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:02:34.890530 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:02:34.890541 | orchestrator | 2025-09-19 07:02:34.890552 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-09-19 07:02:34.890572 | orchestrator | Friday 19 September 2025 07:01:03 +0000 (0:00:02.208) 0:00:53.763 ****** 2025-09-19 07:02:34.890598 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:02:34.890610 | orchestrator | changed: [testbed-manager] 2025-09-19 07:02:34.890631 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:02:34.890643 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:02:34.890654 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:02:34.890665 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:02:34.890675 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:02:34.890687 | orchestrator | 2025-09-19 07:02:34.890698 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 07:02:34.890709 | orchestrator | Friday 19 September 2025 07:01:05 +0000 (0:00:02.097) 0:00:55.861 ****** 2025-09-19 07:02:34.890720 | orchestrator | 2025-09-19 07:02:34.890731 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 07:02:34.890742 | orchestrator | Friday 19 September 2025 07:01:05 +0000 (0:00:00.095) 0:00:55.956 ****** 2025-09-19 07:02:34.890753 | orchestrator | 2025-09-19 07:02:34.890765 | orchestrator | TASK [common : Flush handlers] 
************************************************* 2025-09-19 07:02:34.890776 | orchestrator | Friday 19 September 2025 07:01:06 +0000 (0:00:00.095) 0:00:56.052 ****** 2025-09-19 07:02:34.890787 | orchestrator | 2025-09-19 07:02:34.890798 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 07:02:34.890809 | orchestrator | Friday 19 September 2025 07:01:06 +0000 (0:00:00.261) 0:00:56.313 ****** 2025-09-19 07:02:34.890820 | orchestrator | 2025-09-19 07:02:34.890832 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 07:02:34.890843 | orchestrator | Friday 19 September 2025 07:01:06 +0000 (0:00:00.108) 0:00:56.421 ****** 2025-09-19 07:02:34.890854 | orchestrator | 2025-09-19 07:02:34.890865 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 07:02:34.890876 | orchestrator | Friday 19 September 2025 07:01:06 +0000 (0:00:00.068) 0:00:56.490 ****** 2025-09-19 07:02:34.890887 | orchestrator | 2025-09-19 07:02:34.890898 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 07:02:34.890909 | orchestrator | Friday 19 September 2025 07:01:06 +0000 (0:00:00.077) 0:00:56.567 ****** 2025-09-19 07:02:34.890920 | orchestrator | 2025-09-19 07:02:34.890931 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-09-19 07:02:34.890942 | orchestrator | Friday 19 September 2025 07:01:06 +0000 (0:00:00.108) 0:00:56.676 ****** 2025-09-19 07:02:34.890953 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:02:34.890964 | orchestrator | changed: [testbed-manager] 2025-09-19 07:02:34.890976 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:02:34.890987 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:02:34.890998 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:02:34.891009 | orchestrator | 
changed: [testbed-node-3] 2025-09-19 07:02:34.891020 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:02:34.891031 | orchestrator | 2025-09-19 07:02:34.891042 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-09-19 07:02:34.891058 | orchestrator | Friday 19 September 2025 07:01:45 +0000 (0:00:39.245) 0:01:35.921 ****** 2025-09-19 07:02:34.891077 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:02:34.891088 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:02:34.891100 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:02:34.891110 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:02:34.891122 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:02:34.891133 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:02:34.891144 | orchestrator | changed: [testbed-manager] 2025-09-19 07:02:34.891155 | orchestrator | 2025-09-19 07:02:34.891166 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-09-19 07:02:34.891177 | orchestrator | Friday 19 September 2025 07:02:22 +0000 (0:00:36.734) 0:02:12.656 ****** 2025-09-19 07:02:34.891188 | orchestrator | ok: [testbed-manager] 2025-09-19 07:02:34.891200 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:02:34.891211 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:02:34.891222 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:02:34.891234 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:02:34.891245 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:02:34.891256 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:02:34.891267 | orchestrator | 2025-09-19 07:02:34.891278 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-09-19 07:02:34.891289 | orchestrator | Friday 19 September 2025 07:02:24 +0000 (0:00:02.164) 0:02:14.821 ****** 2025-09-19 07:02:34.891300 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:02:34.891311 | 
orchestrator | changed: [testbed-node-1] 2025-09-19 07:02:34.891323 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:02:34.891334 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:02:34.891344 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:02:34.891355 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:02:34.891384 | orchestrator | changed: [testbed-manager] 2025-09-19 07:02:34.891395 | orchestrator | 2025-09-19 07:02:34.891407 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:02:34.891420 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 07:02:34.891432 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 07:02:34.891443 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 07:02:34.891461 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 07:02:34.891473 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 07:02:34.891484 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 07:02:34.891496 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 07:02:34.891515 | orchestrator | 2025-09-19 07:02:34.891534 | orchestrator | 2025-09-19 07:02:34.891552 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:02:34.891570 | orchestrator | Friday 19 September 2025 07:02:32 +0000 (0:00:07.982) 0:02:22.803 ****** 2025-09-19 07:02:34.891587 | orchestrator | =============================================================================== 2025-09-19 07:02:34.891606 | orchestrator | 
common : Restart fluentd container ------------------------------------- 39.25s 2025-09-19 07:02:34.891626 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 36.73s 2025-09-19 07:02:34.891639 | orchestrator | common : Restart cron container ----------------------------------------- 7.98s 2025-09-19 07:02:34.891650 | orchestrator | common : Copying over config.json files for services -------------------- 6.47s 2025-09-19 07:02:34.891670 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.19s 2025-09-19 07:02:34.891681 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.81s 2025-09-19 07:02:34.891692 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.64s 2025-09-19 07:02:34.891703 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.63s 2025-09-19 07:02:34.891714 | orchestrator | common : Check common containers ---------------------------------------- 3.62s 2025-09-19 07:02:34.891725 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.36s 2025-09-19 07:02:34.891736 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.28s 2025-09-19 07:02:34.891747 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.96s 2025-09-19 07:02:34.891758 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.71s 2025-09-19 07:02:34.891768 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.60s 2025-09-19 07:02:34.891779 | orchestrator | common : Creating log volume -------------------------------------------- 2.21s 2025-09-19 07:02:34.891790 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.17s 2025-09-19 07:02:34.891802 | orchestrator | common : Link 
kolla_logs volume to /var/log/kolla ----------------------- 2.10s 2025-09-19 07:02:34.891812 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.72s 2025-09-19 07:02:34.891829 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.41s 2025-09-19 07:02:34.891841 | orchestrator | common : include_tasks -------------------------------------------------- 1.23s 2025-09-19 07:02:34.891852 | orchestrator | 2025-09-19 07:02:34 | INFO  | Task d2f52bae-21db-4816-9c77-a0af370701b6 is in state STARTED 2025-09-19 07:02:34.891863 | orchestrator | 2025-09-19 07:02:34 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED 2025-09-19 07:02:34.891874 | orchestrator | 2025-09-19 07:02:34 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:02:34.891886 | orchestrator | 2025-09-19 07:02:34 | INFO  | Task 9b915df4-8c65-46ad-ad8d-55148d82a08a is in state STARTED 2025-09-19 07:02:34.891897 | orchestrator | 2025-09-19 07:02:34 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:02:34.891908 | orchestrator | 2025-09-19 07:02:34 | INFO  | Task 51014e67-9026-46cd-8924-ab27831d4b51 is in state STARTED 2025-09-19 07:02:34.891919 | orchestrator | 2025-09-19 07:02:34 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:02:37.933234 | orchestrator | 2025-09-19 07:02:37 | INFO  | Task d2f52bae-21db-4816-9c77-a0af370701b6 is in state STARTED 2025-09-19 07:02:37.935236 | orchestrator | 2025-09-19 07:02:37 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED 2025-09-19 07:02:37.938652 | orchestrator | 2025-09-19 07:02:37 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:02:37.942125 | orchestrator | 2025-09-19 07:02:37 | INFO  | Task 9b915df4-8c65-46ad-ad8d-55148d82a08a is in state STARTED 2025-09-19 07:02:37.943904 | orchestrator | 2025-09-19 07:02:37 | INFO  | Task 
9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:02:37.949804 | orchestrator | 2025-09-19 07:02:37 | INFO  | Task 51014e67-9026-46cd-8924-ab27831d4b51 is in state STARTED 2025-09-19 07:02:37.949876 | orchestrator | 2025-09-19 07:02:37 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:02:40.978541 | orchestrator | 2025-09-19 07:02:40 | INFO  | Task d2f52bae-21db-4816-9c77-a0af370701b6 is in state STARTED 2025-09-19 07:02:40.978967 | orchestrator | 2025-09-19 07:02:40 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED 2025-09-19 07:02:40.979811 | orchestrator | 2025-09-19 07:02:40 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:02:40.980590 | orchestrator | 2025-09-19 07:02:40 | INFO  | Task 9b915df4-8c65-46ad-ad8d-55148d82a08a is in state STARTED 2025-09-19 07:02:40.981435 | orchestrator | 2025-09-19 07:02:40 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:02:40.982493 | orchestrator | 2025-09-19 07:02:40 | INFO  | Task 51014e67-9026-46cd-8924-ab27831d4b51 is in state STARTED 2025-09-19 07:02:40.982557 | orchestrator | 2025-09-19 07:02:40 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:02:44.015222 | orchestrator | 2025-09-19 07:02:44 | INFO  | Task d2f52bae-21db-4816-9c77-a0af370701b6 is in state STARTED 2025-09-19 07:02:44.015313 | orchestrator | 2025-09-19 07:02:44 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED 2025-09-19 07:02:44.015778 | orchestrator | 2025-09-19 07:02:44 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:02:44.016633 | orchestrator | 2025-09-19 07:02:44 | INFO  | Task 9b915df4-8c65-46ad-ad8d-55148d82a08a is in state STARTED 2025-09-19 07:02:44.017161 | orchestrator | 2025-09-19 07:02:44 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:02:44.017706 | orchestrator | 2025-09-19 07:02:44 | INFO  | Task 
51014e67-9026-46cd-8924-ab27831d4b51 is in state STARTED 2025-09-19 07:02:44.017733 | orchestrator | 2025-09-19 07:02:44 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:02:47.050744 | orchestrator | 2025-09-19 07:02:47 | INFO  | Task d2f52bae-21db-4816-9c77-a0af370701b6 is in state STARTED 2025-09-19 07:02:47.051097 | orchestrator | 2025-09-19 07:02:47 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED 2025-09-19 07:02:47.052043 | orchestrator | 2025-09-19 07:02:47 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:02:47.052928 | orchestrator | 2025-09-19 07:02:47 | INFO  | Task 9b915df4-8c65-46ad-ad8d-55148d82a08a is in state STARTED 2025-09-19 07:02:47.053699 | orchestrator | 2025-09-19 07:02:47 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:02:47.054841 | orchestrator | 2025-09-19 07:02:47 | INFO  | Task 51014e67-9026-46cd-8924-ab27831d4b51 is in state STARTED 2025-09-19 07:02:47.054887 | orchestrator | 2025-09-19 07:02:47 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:02:50.079190 | orchestrator | 2025-09-19 07:02:50 | INFO  | Task d2f52bae-21db-4816-9c77-a0af370701b6 is in state STARTED 2025-09-19 07:02:50.081911 | orchestrator | 2025-09-19 07:02:50 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED 2025-09-19 07:02:50.083513 | orchestrator | 2025-09-19 07:02:50 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:02:50.085183 | orchestrator | 2025-09-19 07:02:50 | INFO  | Task 9b915df4-8c65-46ad-ad8d-55148d82a08a is in state STARTED 2025-09-19 07:02:50.087970 | orchestrator | 2025-09-19 07:02:50 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:02:50.090200 | orchestrator | 2025-09-19 07:02:50 | INFO  | Task 51014e67-9026-46cd-8924-ab27831d4b51 is in state STARTED 2025-09-19 07:02:50.090476 | orchestrator | 2025-09-19 07:02:50 | INFO  | Wait 1 
second(s) until the next check 2025-09-19 07:02:53.125452 | orchestrator | 2025-09-19 07:02:53 | INFO  | Task d2f52bae-21db-4816-9c77-a0af370701b6 is in state STARTED 2025-09-19 07:02:53.128668 | orchestrator | 2025-09-19 07:02:53 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED 2025-09-19 07:02:53.129984 | orchestrator | 2025-09-19 07:02:53 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:02:53.131664 | orchestrator | 2025-09-19 07:02:53 | INFO  | Task 9b915df4-8c65-46ad-ad8d-55148d82a08a is in state STARTED 2025-09-19 07:02:53.134670 | orchestrator | 2025-09-19 07:02:53 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:02:53.135750 | orchestrator | 2025-09-19 07:02:53 | INFO  | Task 51014e67-9026-46cd-8924-ab27831d4b51 is in state STARTED 2025-09-19 07:02:53.137220 | orchestrator | 2025-09-19 07:02:53 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:02:56.165836 | orchestrator | 2025-09-19 07:02:56 | INFO  | Task d2f52bae-21db-4816-9c77-a0af370701b6 is in state SUCCESS 2025-09-19 07:02:56.168177 | orchestrator | 2025-09-19 07:02:56 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED 2025-09-19 07:02:56.171590 | orchestrator | 2025-09-19 07:02:56 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:02:56.173662 | orchestrator | 2025-09-19 07:02:56 | INFO  | Task bd66b278-26fe-49d9-8d2f-9f9f75146d12 is in state STARTED 2025-09-19 07:02:56.177118 | orchestrator | 2025-09-19 07:02:56 | INFO  | Task 9b915df4-8c65-46ad-ad8d-55148d82a08a is in state STARTED 2025-09-19 07:02:56.178110 | orchestrator | 2025-09-19 07:02:56 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:02:56.179317 | orchestrator | 2025-09-19 07:02:56 | INFO  | Task 51014e67-9026-46cd-8924-ab27831d4b51 is in state STARTED 2025-09-19 07:02:56.179491 | orchestrator | 2025-09-19 07:02:56 | INFO  | Wait 1 
second(s) until the next check 2025-09-19 07:02:59.206147 | orchestrator | 2025-09-19 07:02:59 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED 2025-09-19 07:02:59.206320 | orchestrator | 2025-09-19 07:02:59 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:02:59.207722 | orchestrator | 2025-09-19 07:02:59 | INFO  | Task bd66b278-26fe-49d9-8d2f-9f9f75146d12 is in state STARTED 2025-09-19 07:02:59.208304 | orchestrator | 2025-09-19 07:02:59 | INFO  | Task 9b915df4-8c65-46ad-ad8d-55148d82a08a is in state STARTED 2025-09-19 07:02:59.209162 | orchestrator | 2025-09-19 07:02:59 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:02:59.210159 | orchestrator | 2025-09-19 07:02:59 | INFO  | Task 51014e67-9026-46cd-8924-ab27831d4b51 is in state STARTED 2025-09-19 07:02:59.210192 | orchestrator | 2025-09-19 07:02:59 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:03:02.256947 | orchestrator | 2025-09-19 07:03:02 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED 2025-09-19 07:03:02.259843 | orchestrator | 2025-09-19 07:03:02 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:03:02.261158 | orchestrator | 2025-09-19 07:03:02 | INFO  | Task bd66b278-26fe-49d9-8d2f-9f9f75146d12 is in state STARTED 2025-09-19 07:03:02.264311 | orchestrator | 2025-09-19 07:03:02 | INFO  | Task 9b915df4-8c65-46ad-ad8d-55148d82a08a is in state STARTED 2025-09-19 07:03:02.266116 | orchestrator | 2025-09-19 07:03:02 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:03:02.270289 | orchestrator | 2025-09-19 07:03:02 | INFO  | Task 51014e67-9026-46cd-8924-ab27831d4b51 is in state STARTED 2025-09-19 07:03:02.270326 | orchestrator | 2025-09-19 07:03:02 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:03:05.311676 | orchestrator | 2025-09-19 07:03:05 | INFO  | Task 
d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED 2025-09-19 07:03:05.312394 | orchestrator | 2025-09-19 07:03:05 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:03:05.313478 | orchestrator | 2025-09-19 07:03:05 | INFO  | Task bd66b278-26fe-49d9-8d2f-9f9f75146d12 is in state STARTED 2025-09-19 07:03:05.314180 | orchestrator | 2025-09-19 07:03:05 | INFO  | Task 9b915df4-8c65-46ad-ad8d-55148d82a08a is in state STARTED 2025-09-19 07:03:05.315218 | orchestrator | 2025-09-19 07:03:05 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:03:05.316037 | orchestrator | 2025-09-19 07:03:05 | INFO  | Task 51014e67-9026-46cd-8924-ab27831d4b51 is in state STARTED 2025-09-19 07:03:05.316221 | orchestrator | 2025-09-19 07:03:05 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:03:08.397483 | orchestrator | 2025-09-19 07:03:08 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED 2025-09-19 07:03:08.398520 | orchestrator | 2025-09-19 07:03:08 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:03:08.399129 | orchestrator | 2025-09-19 07:03:08 | INFO  | Task bd66b278-26fe-49d9-8d2f-9f9f75146d12 is in state STARTED 2025-09-19 07:03:08.400315 | orchestrator | 2025-09-19 07:03:08 | INFO  | Task 9b915df4-8c65-46ad-ad8d-55148d82a08a is in state STARTED 2025-09-19 07:03:08.402448 | orchestrator | 2025-09-19 07:03:08 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:03:08.407769 | orchestrator | 2025-09-19 07:03:08 | INFO  | Task 51014e67-9026-46cd-8924-ab27831d4b51 is in state STARTED 2025-09-19 07:03:08.407834 | orchestrator | 2025-09-19 07:03:08 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:03:11.479114 | orchestrator | 2025-09-19 07:03:11 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED 2025-09-19 07:03:11.481078 | orchestrator | 2025-09-19 07:03:11 | INFO  | Task 
ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:03:11.481940 | orchestrator | 2025-09-19 07:03:11 | INFO  | Task bd66b278-26fe-49d9-8d2f-9f9f75146d12 is in state STARTED 2025-09-19 07:03:11.482821 | orchestrator | 2025-09-19 07:03:11 | INFO  | Task 9b915df4-8c65-46ad-ad8d-55148d82a08a is in state STARTED 2025-09-19 07:03:11.483887 | orchestrator | 2025-09-19 07:03:11 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:03:11.489505 | orchestrator | 2025-09-19 07:03:11.489594 | orchestrator | 2025-09-19 07:03:11.489610 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:03:11.489625 | orchestrator | 2025-09-19 07:03:11.489637 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 07:03:11.489649 | orchestrator | Friday 19 September 2025 07:02:39 +0000 (0:00:00.399) 0:00:00.399 ****** 2025-09-19 07:03:11.489661 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:03:11.489673 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:03:11.489685 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:03:11.489696 | orchestrator | 2025-09-19 07:03:11.489708 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:03:11.489720 | orchestrator | Friday 19 September 2025 07:02:39 +0000 (0:00:00.427) 0:00:00.826 ****** 2025-09-19 07:03:11.489732 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-09-19 07:03:11.489744 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-09-19 07:03:11.489755 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-09-19 07:03:11.489766 | orchestrator | 2025-09-19 07:03:11.489778 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-09-19 07:03:11.489789 | orchestrator | 2025-09-19 07:03:11.489801 | orchestrator | TASK 
[memcached : include_tasks] *********************************************** 2025-09-19 07:03:11.489812 | orchestrator | Friday 19 September 2025 07:02:40 +0000 (0:00:00.712) 0:00:01.539 ****** 2025-09-19 07:03:11.489844 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:03:11.489856 | orchestrator | 2025-09-19 07:03:11.489867 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-09-19 07:03:11.489878 | orchestrator | Friday 19 September 2025 07:02:41 +0000 (0:00:00.848) 0:00:02.387 ****** 2025-09-19 07:03:11.489890 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-19 07:03:11.489901 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-19 07:03:11.489913 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-19 07:03:11.489924 | orchestrator | 2025-09-19 07:03:11.489935 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-09-19 07:03:11.489946 | orchestrator | Friday 19 September 2025 07:02:42 +0000 (0:00:00.822) 0:00:03.210 ****** 2025-09-19 07:03:11.489957 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-19 07:03:11.489977 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-19 07:03:11.489989 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-19 07:03:11.490000 | orchestrator | 2025-09-19 07:03:11.490014 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-09-19 07:03:11.490103 | orchestrator | Friday 19 September 2025 07:02:44 +0000 (0:00:02.169) 0:00:05.379 ****** 2025-09-19 07:03:11.490117 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:03:11.490131 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:03:11.490144 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:03:11.490156 | orchestrator | 
2025-09-19 07:03:11.490169 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-09-19 07:03:11.490182 | orchestrator | Friday 19 September 2025 07:02:46 +0000 (0:00:01.951) 0:00:07.330 ****** 2025-09-19 07:03:11.490194 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:03:11.490205 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:03:11.490216 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:03:11.490233 | orchestrator | 2025-09-19 07:03:11.490252 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:03:11.490271 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:03:11.490291 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:03:11.490309 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:03:11.490327 | orchestrator | 2025-09-19 07:03:11.490386 | orchestrator | 2025-09-19 07:03:11.490405 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:03:11.490423 | orchestrator | Friday 19 September 2025 07:02:53 +0000 (0:00:06.660) 0:00:13.991 ****** 2025-09-19 07:03:11.490440 | orchestrator | =============================================================================== 2025-09-19 07:03:11.490459 | orchestrator | memcached : Restart memcached container --------------------------------- 6.66s 2025-09-19 07:03:11.490478 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.17s 2025-09-19 07:03:11.490498 | orchestrator | memcached : Check memcached container ----------------------------------- 1.95s 2025-09-19 07:03:11.490516 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.85s 2025-09-19 07:03:11.490536 | 
orchestrator | memcached : Ensuring config directories exist --------------------------- 0.82s 2025-09-19 07:03:11.490548 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.71s 2025-09-19 07:03:11.490559 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.43s 2025-09-19 07:03:11.490570 | orchestrator | 2025-09-19 07:03:11.490581 | orchestrator | 2025-09-19 07:03:11.490592 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:03:11.490614 | orchestrator | 2025-09-19 07:03:11.490625 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 07:03:11.490636 | orchestrator | Friday 19 September 2025 07:02:39 +0000 (0:00:00.573) 0:00:00.573 ****** 2025-09-19 07:03:11.490647 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:03:11.490659 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:03:11.490670 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:03:11.490680 | orchestrator | 2025-09-19 07:03:11.490692 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:03:11.490721 | orchestrator | Friday 19 September 2025 07:02:39 +0000 (0:00:00.545) 0:00:01.119 ****** 2025-09-19 07:03:11.490734 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-09-19 07:03:11.490746 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-09-19 07:03:11.490757 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-09-19 07:03:11.490768 | orchestrator | 2025-09-19 07:03:11.490779 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-09-19 07:03:11.490791 | orchestrator | 2025-09-19 07:03:11.490802 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-09-19 07:03:11.490813 | orchestrator | Friday 19 
September 2025 07:02:40 +0000 (0:00:00.717) 0:00:01.836 ****** 2025-09-19 07:03:11.490824 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:03:11.490835 | orchestrator | 2025-09-19 07:03:11.490846 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-09-19 07:03:11.490857 | orchestrator | Friday 19 September 2025 07:02:41 +0000 (0:00:00.739) 0:00:02.576 ****** 2025-09-19 07:03:11.490872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 07:03:11.490897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 07:03:11.490910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 07:03:11.490922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 07:03:11.490941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 07:03:11.490962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': 
'/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 07:03:11.490975 | orchestrator | 2025-09-19 07:03:11.490987 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-09-19 07:03:11.490998 | orchestrator | Friday 19 September 2025 07:02:42 +0000 (0:00:01.366) 0:00:03.942 ****** 2025-09-19 07:03:11.491010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 07:03:11.491027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 
6379'], 'timeout': '30'}}}) 2025-09-19 07:03:11.491040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 07:03:11.491052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 07:03:11.491070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 07:03:11.491098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 07:03:11.491110 | orchestrator | 2025-09-19 07:03:11.491122 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-09-19 07:03:11.491134 | orchestrator | Friday 19 September 2025 07:02:45 +0000 (0:00:03.088) 0:00:07.031 ****** 2025-09-19 07:03:11.491145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 07:03:11.491162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 07:03:11.491174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 07:03:11.491186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 07:03:11.491203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 07:03:11.491222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 07:03:11.491234 | orchestrator | 2025-09-19 07:03:11.491245 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-09-19 07:03:11.491257 | orchestrator | Friday 19 September 2025 07:02:48 +0000 (0:00:02.897) 0:00:09.929 ****** 2025-09-19 07:03:11.491269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 07:03:11.491281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 07:03:11.491297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 07:03:11.491309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 
'timeout': '30'}}}) 2025-09-19 07:03:11.491327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 07:03:11.491386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 07:03:11.491401 | orchestrator | 2025-09-19 07:03:11.491413 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-19 07:03:11.491425 | orchestrator | Friday 19 September 2025 07:02:49 +0000 (0:00:01.564) 0:00:11.493 ****** 2025-09-19 07:03:11.491436 | orchestrator | 2025-09-19 07:03:11.491448 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-19 07:03:11.491459 | 
orchestrator | Friday 19 September 2025 07:02:50 +0000 (0:00:00.071) 0:00:11.565 ****** 2025-09-19 07:03:11.491470 | orchestrator | 2025-09-19 07:03:11.491481 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-19 07:03:11.491493 | orchestrator | Friday 19 September 2025 07:02:50 +0000 (0:00:00.062) 0:00:11.628 ****** 2025-09-19 07:03:11.491504 | orchestrator | 2025-09-19 07:03:11.491515 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-09-19 07:03:11.491526 | orchestrator | Friday 19 September 2025 07:02:50 +0000 (0:00:00.063) 0:00:11.691 ****** 2025-09-19 07:03:11.491538 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:03:11.491549 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:03:11.491560 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:03:11.491571 | orchestrator | 2025-09-19 07:03:11.491582 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-09-19 07:03:11.491594 | orchestrator | Friday 19 September 2025 07:02:58 +0000 (0:00:08.312) 0:00:20.003 ****** 2025-09-19 07:03:11.491605 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:03:11.491616 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:03:11.491627 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:03:11.491638 | orchestrator | 2025-09-19 07:03:11.491650 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:03:11.491661 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:03:11.491680 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:03:11.491692 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:03:11.491703 | orchestrator | 2025-09-19 07:03:11.491715 
| orchestrator | 2025-09-19 07:03:11.491726 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:03:11.491737 | orchestrator | Friday 19 September 2025 07:03:09 +0000 (0:00:11.352) 0:00:31.356 ****** 2025-09-19 07:03:11.491748 | orchestrator | =============================================================================== 2025-09-19 07:03:11.491759 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 11.35s 2025-09-19 07:03:11.491771 | orchestrator | redis : Restart redis container ----------------------------------------- 8.31s 2025-09-19 07:03:11.491782 | orchestrator | redis : Copying over default config.json files -------------------------- 3.09s 2025-09-19 07:03:11.491793 | orchestrator | redis : Copying over redis config files --------------------------------- 2.90s 2025-09-19 07:03:11.491804 | orchestrator | redis : Check redis containers ------------------------------------------ 1.56s 2025-09-19 07:03:11.491815 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.37s 2025-09-19 07:03:11.491826 | orchestrator | redis : include_tasks --------------------------------------------------- 0.74s 2025-09-19 07:03:11.491837 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.72s 2025-09-19 07:03:11.491848 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.55s 2025-09-19 07:03:11.491859 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.20s 2025-09-19 07:03:11.491871 | orchestrator | 2025-09-19 07:03:11 | INFO  | Task 51014e67-9026-46cd-8924-ab27831d4b51 is in state SUCCESS 2025-09-19 07:03:11.491882 | orchestrator | 2025-09-19 07:03:11 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:03:14.529270 | orchestrator | 2025-09-19 07:03:14 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state 
STARTED 2025-09-19 07:03:14.529896 | orchestrator | 2025-09-19 07:03:14 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:03:14.530864 | orchestrator | 2025-09-19 07:03:14 | INFO  | Task bd66b278-26fe-49d9-8d2f-9f9f75146d12 is in state STARTED 2025-09-19 07:03:14.531688 | orchestrator | 2025-09-19 07:03:14 | INFO  | Task 9b915df4-8c65-46ad-ad8d-55148d82a08a is in state STARTED 2025-09-19 07:03:14.532755 | orchestrator | 2025-09-19 07:03:14 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:03:14.532800 | orchestrator | 2025-09-19 07:03:14 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:03:48.082925 | orchestrator | 2025-09-19 07:03:48 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED 2025-09-19 07:03:48.083047 | orchestrator | 2025-09-19 07:03:48 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:03:48.083405 | orchestrator | 2025-09-19 07:03:48 | INFO  | Task bd66b278-26fe-49d9-8d2f-9f9f75146d12 is in state STARTED 2025-09-19 07:03:48.084757 | orchestrator | 2025-09-19 07:03:48.084834 | orchestrator | 2025-09-19 07:03:48 | INFO  | Task 9b915df4-8c65-46ad-ad8d-55148d82a08a is in state SUCCESS 2025-09-19 07:03:48.086518 | orchestrator | 2025-09-19 07:03:48.086558 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:03:48.086571 | orchestrator | 2025-09-19 07:03:48.086581 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 07:03:48.086592 | orchestrator | Friday 19 September 2025 07:02:39 +0000 (0:00:00.400) 0:00:00.400 ****** 2025-09-19 07:03:48.086602 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:03:48.086613 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:03:48.086623 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:03:48.086633 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:03:48.086644 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:03:48.086654 |
orchestrator | ok: [testbed-node-2] 2025-09-19 07:03:48.086664 | orchestrator | 2025-09-19 07:03:48.086674 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:03:48.086684 | orchestrator | Friday 19 September 2025 07:02:40 +0000 (0:00:01.087) 0:00:01.488 ****** 2025-09-19 07:03:48.086695 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-19 07:03:48.086705 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-19 07:03:48.086715 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-19 07:03:48.086725 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-19 07:03:48.086735 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-19 07:03:48.086745 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-19 07:03:48.086755 | orchestrator | 2025-09-19 07:03:48.086766 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-09-19 07:03:48.086777 | orchestrator | 2025-09-19 07:03:48.086787 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-09-19 07:03:48.086797 | orchestrator | Friday 19 September 2025 07:02:41 +0000 (0:00:00.961) 0:00:02.449 ****** 2025-09-19 07:03:48.086808 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:03:48.086819 | orchestrator | 2025-09-19 07:03:48.086829 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-19 07:03:48.086839 | orchestrator | Friday 19 September 2025 07:02:43 +0000 (0:00:01.770) 0:00:04.220 ****** 2025-09-19 
07:03:48.086849 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-19 07:03:48.086860 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-19 07:03:48.086870 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-19 07:03:48.086880 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-19 07:03:48.086890 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-19 07:03:48.086899 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-19 07:03:48.086909 | orchestrator | 2025-09-19 07:03:48.086919 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-19 07:03:48.086929 | orchestrator | Friday 19 September 2025 07:02:44 +0000 (0:00:01.382) 0:00:05.602 ****** 2025-09-19 07:03:48.086939 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-19 07:03:48.086949 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-19 07:03:48.086959 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-19 07:03:48.086969 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-19 07:03:48.086979 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-19 07:03:48.086989 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-19 07:03:48.086999 | orchestrator | 2025-09-19 07:03:48.087009 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-19 07:03:48.087018 | orchestrator | Friday 19 September 2025 07:02:46 +0000 (0:00:01.811) 0:00:07.414 ****** 2025-09-19 07:03:48.087043 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-09-19 07:03:48.087064 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:03:48.087075 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-09-19 07:03:48.087085 | orchestrator | skipping: [testbed-node-4] 
2025-09-19 07:03:48.087095 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-09-19 07:03:48.087107 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:03:48.087119 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-09-19 07:03:48.087131 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:03:48.087142 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-09-19 07:03:48.087154 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:03:48.087166 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-09-19 07:03:48.087178 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:03:48.087189 | orchestrator | 2025-09-19 07:03:48.087201 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-09-19 07:03:48.087213 | orchestrator | Friday 19 September 2025 07:02:48 +0000 (0:00:01.413) 0:00:08.827 ****** 2025-09-19 07:03:48.087225 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:03:48.087236 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:03:48.087247 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:03:48.087259 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:03:48.087271 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:03:48.087283 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:03:48.087295 | orchestrator | 2025-09-19 07:03:48.087307 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-09-19 07:03:48.087318 | orchestrator | Friday 19 September 2025 07:02:48 +0000 (0:00:00.759) 0:00:09.587 ****** 2025-09-19 07:03:48.087372 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:03:48.087391 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:03:48.087404 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:03:48.087424 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:03:48.087442 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 07:03:48.087461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:03:48.087472 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 07:03:48.087483 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 07:03:48.087493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:03:48.087513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 07:03:48.087524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': 
'30'}}}) 2025-09-19 07:03:48.087540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 07:03:48.087551 | orchestrator | 2025-09-19 07:03:48.087562 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-09-19 07:03:48.087572 | orchestrator | Friday 19 September 2025 07:02:50 +0000 (0:00:01.666) 0:00:11.253 ****** 2025-09-19 07:03:48.087583 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:03:48.087594 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:03:48.087611 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:03:48.087626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:03:48.087637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:03:48.087653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:03:48.087665 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 07:03:48.087675 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 07:03:48.087692 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 07:03:48.087706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 
'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-19 07:03:48.087722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-19 07:03:48.087734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-19 07:03:48.087744 | orchestrator |
2025-09-19 07:03:48.087754 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-09-19 07:03:48.087764 | orchestrator | Friday 19 September 2025 07:02:53 +0000 (0:00:02.726) 0:00:13.980 ******
2025-09-19 07:03:48.087775 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:03:48.087785 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:03:48.087795 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:03:48.087814 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:03:48.087824 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:03:48.087834 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:03:48.087844 | orchestrator |
2025-09-19 07:03:48.087854 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-09-19 07:03:48.087864 | orchestrator | Friday 19 September 2025 07:02:54 +0000 (0:00:01.001) 0:00:14.981 ******
2025-09-19 07:03:48.087875 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-19 07:03:48.087886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-19 07:03:48.087901 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-19 07:03:48.087917 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-19 07:03:48.087928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-19 07:03:48.087944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-19 07:03:48.087955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-19 07:03:48.087970 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-19 07:03:48.087981 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-19 07:03:48.088006 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-19 07:03:48.088017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-19 07:03:48.088033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-19 07:03:48.088044 | orchestrator |
2025-09-19 07:03:48.088054 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-19 07:03:48.088064 | orchestrator | Friday 19 September 2025 07:02:56 +0000 (0:00:02.168) 0:00:17.150 ******
2025-09-19 07:03:48.088074 | orchestrator |
2025-09-19 07:03:48.088084 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-19 07:03:48.088095 | orchestrator | Friday 19 September 2025 07:02:56 +0000 (0:00:00.124) 0:00:17.274 ******
2025-09-19 07:03:48.088104 | orchestrator |
2025-09-19 07:03:48.088114 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-19 07:03:48.088124 | orchestrator | Friday 19 September 2025 07:02:56 +0000 (0:00:00.170) 0:00:17.445 ******
2025-09-19 07:03:48.088134 | orchestrator |
2025-09-19 07:03:48.088144 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-19 07:03:48.088154 | orchestrator | Friday 19 September 2025 07:02:56 +0000 (0:00:00.231) 0:00:17.676 ******
2025-09-19 07:03:48.088164 | orchestrator |
2025-09-19 07:03:48.088187 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-19 07:03:48.088198 | orchestrator | Friday 19 September 2025 07:02:57 +0000 (0:00:00.229) 0:00:17.905 ******
2025-09-19 07:03:48.088208 | orchestrator |
2025-09-19 07:03:48.088218 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-19 07:03:48.088228 | orchestrator | Friday 19 September 2025 07:02:57 +0000 (0:00:00.283) 0:00:18.189 ******
2025-09-19 07:03:48.088238 | orchestrator |
2025-09-19 07:03:48.088252 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-09-19 07:03:48.088262 | orchestrator | Friday 19 September 2025 07:02:57 +0000 (0:00:00.240) 0:00:18.429 ******
2025-09-19 07:03:48.088272 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:03:48.088282 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:03:48.088292 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:03:48.088303 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:03:48.088312 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:03:48.088323 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:03:48.088368 | orchestrator |
2025-09-19 07:03:48.088378 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-09-19 07:03:48.088388 | orchestrator | Friday 19 September 2025 07:03:11 +0000 (0:00:13.948) 0:00:32.378 ******
2025-09-19 07:03:48.088398 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:03:48.088408 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:03:48.088418 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:03:48.088428 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:03:48.088438 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:03:48.088448 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:03:48.088458 | orchestrator |
2025-09-19 07:03:48.088468 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-09-19 07:03:48.088488 | orchestrator | Friday 19 September 2025 07:03:12 +0000 (0:00:01.293) 0:00:33.672 ******
2025-09-19 07:03:48.088498 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:03:48.088508 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:03:48.088518 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:03:48.088528 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:03:48.088538 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:03:48.088548 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:03:48.088558 | orchestrator |
2025-09-19 07:03:48.088568 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-09-19 07:03:48.088578 | orchestrator | Friday 19 September 2025 07:03:21 +0000 (0:00:08.424) 0:00:42.096 ******
2025-09-19 07:03:48.088594 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-09-19 07:03:48.088605 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-09-19 07:03:48.088616 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-09-19 07:03:48.088626 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-09-19 07:03:48.088636 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-09-19 07:03:48.088646 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-09-19 07:03:48.088656 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-09-19 07:03:48.088666 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-09-19 07:03:48.088676 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-09-19 07:03:48.088686 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-09-19 07:03:48.088696 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-09-19 07:03:48.088706 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-09-19 07:03:48.088716 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-19 07:03:48.088726 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-19 07:03:48.088736 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-19 07:03:48.088746 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-19 07:03:48.088755 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-19 07:03:48.088765 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-19 07:03:48.088775 | orchestrator |
2025-09-19 07:03:48.088786 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-09-19 07:03:48.088796 | orchestrator | Friday 19 September 2025 07:03:29 +0000 (0:00:07.892) 0:00:49.989 ******
2025-09-19 07:03:48.088806 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-09-19 07:03:48.088816 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:03:48.088826 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-09-19 07:03:48.088836 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:03:48.088846 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-09-19 07:03:48.088862 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:03:48.088872 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-09-19 07:03:48.088882 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-09-19 07:03:48.088892 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-09-19 07:03:48.088902 | orchestrator |
2025-09-19 07:03:48.088912 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-09-19 07:03:48.088922 | orchestrator | Friday 19 September 2025 07:03:32 +0000 (0:00:03.056) 0:00:53.046 ******
2025-09-19 07:03:48.088932 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-09-19 07:03:48.088942 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:03:48.088952 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-09-19 07:03:48.088962 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:03:48.088972 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-09-19 07:03:48.088982 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:03:48.088992 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-09-19 07:03:48.089003 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-09-19 07:03:48.089019 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-09-19 07:03:48.089029 | orchestrator |
2025-09-19 07:03:48.089039 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-09-19 07:03:48.089049 | orchestrator | Friday 19 September 2025 07:03:36 +0000 (0:00:04.287) 0:00:57.334 ******
2025-09-19 07:03:48.089059 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:03:48.089069 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:03:48.089079 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:03:48.089089 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:03:48.089099 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:03:48.089109 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:03:48.089119 | orchestrator |
2025-09-19 07:03:48.089129 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:03:48.089140 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-19 07:03:48.089155 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-19 07:03:48.089166 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-19 07:03:48.089176 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-19 07:03:48.089186 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-19 07:03:48.089196 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-19 07:03:48.089206 | orchestrator |
2025-09-19 07:03:48.089217 | orchestrator |
2025-09-19 07:03:48.089227 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:03:48.089237 | orchestrator | Friday 19 September 2025 07:03:45 +0000 (0:00:08.747) 0:01:06.082 ******
2025-09-19 07:03:48.089247 | orchestrator | ===============================================================================
2025-09-19 07:03:48.089257 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.17s
2025-09-19 07:03:48.089267 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 13.95s
2025-09-19 07:03:48.089277 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.89s
2025-09-19 07:03:48.089287 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.29s
2025-09-19 07:03:48.089304 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.06s
2025-09-19 07:03:48.089314 | orchestrator |
openvswitch : Copying over config.json files for services --------------- 2.73s
2025-09-19 07:03:48.089388 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.17s
2025-09-19 07:03:48.089402 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.81s
2025-09-19 07:03:48.089412 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.77s
2025-09-19 07:03:48.089422 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.67s
2025-09-19 07:03:48.089432 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.41s
2025-09-19 07:03:48.089442 | orchestrator | module-load : Load modules ---------------------------------------------- 1.38s
2025-09-19 07:03:48.089452 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.29s
2025-09-19 07:03:48.089462 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.28s
2025-09-19 07:03:48.089470 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.09s
2025-09-19 07:03:48.089478 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.00s
2025-09-19 07:03:48.089486 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.96s
2025-09-19 07:03:48.089494 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.76s
2025-09-19 07:03:48.089503 | orchestrator | 2025-09-19 07:03:48 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:03:48.089511 | orchestrator | 2025-09-19 07:03:48 | INFO  | Task 5289a6c0-f997-49a7-9110-fa59f90f19e8 is in state STARTED
2025-09-19 07:03:48.089519 | orchestrator | 2025-09-19 07:03:48 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:03:51.117462 | orchestrator |
2025-09-19 07:03:51 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:03:51.118735 | orchestrator | 2025-09-19 07:03:51 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:03:51.120112 | orchestrator | 2025-09-19 07:03:51 | INFO  | Task bd66b278-26fe-49d9-8d2f-9f9f75146d12 is in state STARTED
2025-09-19 07:03:51.122770 | orchestrator | 2025-09-19 07:03:51 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:03:51.127643 | orchestrator | 2025-09-19 07:03:51 | INFO  | Task 5289a6c0-f997-49a7-9110-fa59f90f19e8 is in state STARTED
2025-09-19 07:03:51.127713 | orchestrator | 2025-09-19 07:03:51 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:04:52.225129 | orchestrator | 2025-09-19 07:04:52 | INFO  | Task
d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED 2025-09-19 07:04:52.225336 | orchestrator | 2025-09-19 07:04:52 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:04:52.227942 | orchestrator | 2025-09-19 07:04:52 | INFO  | Task bd66b278-26fe-49d9-8d2f-9f9f75146d12 is in state STARTED 2025-09-19 07:04:52.227969 | orchestrator | 2025-09-19 07:04:52 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:04:52.228062 | orchestrator | 2025-09-19 07:04:52 | INFO  | Task 5289a6c0-f997-49a7-9110-fa59f90f19e8 is in state STARTED 2025-09-19 07:04:52.228189 | orchestrator | 2025-09-19 07:04:52 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:04:55.259853 | orchestrator | 2025-09-19 07:04:55 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED 2025-09-19 07:04:55.261076 | orchestrator | 2025-09-19 07:04:55 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:04:55.262935 | orchestrator | 2025-09-19 07:04:55 | INFO  | Task bd66b278-26fe-49d9-8d2f-9f9f75146d12 is in state STARTED 2025-09-19 07:04:55.265540 | orchestrator | 2025-09-19 07:04:55 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:04:55.267129 | orchestrator | 2025-09-19 07:04:55 | INFO  | Task 5289a6c0-f997-49a7-9110-fa59f90f19e8 is in state STARTED 2025-09-19 07:04:55.267506 | orchestrator | 2025-09-19 07:04:55 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:04:58.301417 | orchestrator | 2025-09-19 07:04:58 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED 2025-09-19 07:04:58.304020 | orchestrator | 2025-09-19 07:04:58 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:04:58.304562 | orchestrator | 2025-09-19 07:04:58 | INFO  | Task bd66b278-26fe-49d9-8d2f-9f9f75146d12 is in state STARTED 2025-09-19 07:04:58.305736 | orchestrator | 2025-09-19 07:04:58 | INFO  | Task 
9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:04:58.306710 | orchestrator | 2025-09-19 07:04:58 | INFO  | Task 5289a6c0-f997-49a7-9110-fa59f90f19e8 is in state STARTED 2025-09-19 07:04:58.306754 | orchestrator | 2025-09-19 07:04:58 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:05:01.385477 | orchestrator | 2025-09-19 07:05:01 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED 2025-09-19 07:05:01.386793 | orchestrator | 2025-09-19 07:05:01 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:05:01.388042 | orchestrator | 2025-09-19 07:05:01 | INFO  | Task bd66b278-26fe-49d9-8d2f-9f9f75146d12 is in state STARTED 2025-09-19 07:05:01.389225 | orchestrator | 2025-09-19 07:05:01 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:05:01.390903 | orchestrator | 2025-09-19 07:05:01 | INFO  | Task 5289a6c0-f997-49a7-9110-fa59f90f19e8 is in state STARTED 2025-09-19 07:05:01.390928 | orchestrator | 2025-09-19 07:05:01 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:05:04.422726 | orchestrator | 2025-09-19 07:05:04 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED 2025-09-19 07:05:04.426117 | orchestrator | 2025-09-19 07:05:04 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:05:04.426157 | orchestrator | 2025-09-19 07:05:04 | INFO  | Task bd66b278-26fe-49d9-8d2f-9f9f75146d12 is in state STARTED 2025-09-19 07:05:04.428073 | orchestrator | 2025-09-19 07:05:04 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:05:04.428880 | orchestrator | 2025-09-19 07:05:04 | INFO  | Task 5289a6c0-f997-49a7-9110-fa59f90f19e8 is in state STARTED 2025-09-19 07:05:04.428901 | orchestrator | 2025-09-19 07:05:04 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:05:07.470097 | orchestrator | 2025-09-19 07:05:07 | INFO  | Task 
d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED 2025-09-19 07:05:07.470328 | orchestrator | 2025-09-19 07:05:07 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:05:07.471935 | orchestrator | 2025-09-19 07:05:07 | INFO  | Task bd66b278-26fe-49d9-8d2f-9f9f75146d12 is in state STARTED 2025-09-19 07:05:07.472367 | orchestrator | 2025-09-19 07:05:07 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:05:07.473594 | orchestrator | 2025-09-19 07:05:07 | INFO  | Task 5289a6c0-f997-49a7-9110-fa59f90f19e8 is in state STARTED 2025-09-19 07:05:07.473906 | orchestrator | 2025-09-19 07:05:07 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:05:10.510165 | orchestrator | 2025-09-19 07:05:10 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED 2025-09-19 07:05:10.510746 | orchestrator | 2025-09-19 07:05:10 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:05:10.511604 | orchestrator | 2025-09-19 07:05:10 | INFO  | Task bd66b278-26fe-49d9-8d2f-9f9f75146d12 is in state STARTED 2025-09-19 07:05:10.512730 | orchestrator | 2025-09-19 07:05:10 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:05:10.513732 | orchestrator | 2025-09-19 07:05:10 | INFO  | Task 5289a6c0-f997-49a7-9110-fa59f90f19e8 is in state STARTED 2025-09-19 07:05:10.513782 | orchestrator | 2025-09-19 07:05:10 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:05:13.549860 | orchestrator | 2025-09-19 07:05:13 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED 2025-09-19 07:05:13.550139 | orchestrator | 2025-09-19 07:05:13 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:05:13.551005 | orchestrator | 2025-09-19 07:05:13 | INFO  | Task bd66b278-26fe-49d9-8d2f-9f9f75146d12 is in state STARTED 2025-09-19 07:05:13.551963 | orchestrator | 2025-09-19 07:05:13 | INFO  | Task 
9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:05:13.552822 | orchestrator | 2025-09-19 07:05:13 | INFO  | Task 5289a6c0-f997-49a7-9110-fa59f90f19e8 is in state STARTED 2025-09-19 07:05:13.552854 | orchestrator | 2025-09-19 07:05:13 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:05:16.587040 | orchestrator | 2025-09-19 07:05:16 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED 2025-09-19 07:05:16.587330 | orchestrator | 2025-09-19 07:05:16 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:05:16.591170 | orchestrator | 2025-09-19 07:05:16 | INFO  | Task bd66b278-26fe-49d9-8d2f-9f9f75146d12 is in state STARTED 2025-09-19 07:05:16.593451 | orchestrator | 2025-09-19 07:05:16 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED 2025-09-19 07:05:16.595918 | orchestrator | 2025-09-19 07:05:16 | INFO  | Task 5289a6c0-f997-49a7-9110-fa59f90f19e8 is in state STARTED 2025-09-19 07:05:16.596234 | orchestrator | 2025-09-19 07:05:16 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:05:19.637854 | orchestrator | 2025-09-19 07:05:19 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED 2025-09-19 07:05:19.638115 | orchestrator | 2025-09-19 07:05:19 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:05:19.640668 | orchestrator | 2025-09-19 07:05:19 | INFO  | Task bd66b278-26fe-49d9-8d2f-9f9f75146d12 is in state SUCCESS 2025-09-19 07:05:19.641777 | orchestrator | 2025-09-19 07:05:19.641809 | orchestrator | 2025-09-19 07:05:19.641821 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-09-19 07:05:19.641833 | orchestrator | 2025-09-19 07:05:19.641844 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-19 07:05:19.641855 | orchestrator | Friday 19 September 2025 07:02:59 +0000 (0:00:00.456) 0:00:00.456 
******
2025-09-19 07:05:19.641867 | orchestrator | ok: [localhost] => {
2025-09-19 07:05:19.641880 | orchestrator |     "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-09-19 07:05:19.641892 | orchestrator | }
2025-09-19 07:05:19.641904 | orchestrator |
2025-09-19 07:05:19.641915 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-09-19 07:05:19.641926 | orchestrator | Friday 19 September 2025 07:02:59 +0000 (0:00:00.111) 0:00:00.568 ******
2025-09-19 07:05:19.641938 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-09-19 07:05:19.641972 | orchestrator | ...ignoring
2025-09-19 07:05:19.641984 | orchestrator |
2025-09-19 07:05:19.641995 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-09-19 07:05:19.642006 | orchestrator | Friday 19 September 2025 07:03:03 +0000 (0:00:03.851) 0:00:04.420 ******
2025-09-19 07:05:19.642057 | orchestrator | skipping: [localhost]
2025-09-19 07:05:19.642069 | orchestrator |
2025-09-19 07:05:19.642080 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-09-19 07:05:19.642091 | orchestrator | Friday 19 September 2025 07:03:03 +0000 (0:00:00.050) 0:00:04.470 ******
2025-09-19 07:05:19.642102 | orchestrator | ok: [localhost]
2025-09-19 07:05:19.642113 | orchestrator |
2025-09-19 07:05:19.642124 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 07:05:19.642135 | orchestrator |
2025-09-19 07:05:19.642147 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 07:05:19.642158 | orchestrator | Friday 19 September 2025 07:03:03 +0000 (0:00:00.328) 0:00:04.610 ******
2025-09-19 07:05:19.642169 |
orchestrator | ok: [testbed-node-0]
2025-09-19 07:05:19.642180 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:05:19.642191 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:05:19.642202 | orchestrator |
2025-09-19 07:05:19.642213 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 07:05:19.642225 | orchestrator | Friday 19 September 2025 07:03:03 +0000 (0:00:00.328) 0:00:04.939 ******
2025-09-19 07:05:19.642236 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-09-19 07:05:19.642247 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-09-19 07:05:19.642258 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-09-19 07:05:19.642269 | orchestrator |
2025-09-19 07:05:19.642311 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-09-19 07:05:19.642332 | orchestrator |
2025-09-19 07:05:19.642350 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-09-19 07:05:19.642368 | orchestrator | Friday 19 September 2025 07:03:04 +0000 (0:00:00.463) 0:00:05.402 ******
2025-09-19 07:05:19.642389 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:05:19.642406 | orchestrator |
2025-09-19 07:05:19.642419 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-09-19 07:05:19.642432 | orchestrator | Friday 19 September 2025 07:03:04 +0000 (0:00:00.424) 0:00:05.826 ******
2025-09-19 07:05:19.642445 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:05:19.642459 | orchestrator |
2025-09-19 07:05:19.642473 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-09-19 07:05:19.642485 | orchestrator | Friday 19 September 2025 07:03:05 +0000 (0:00:00.934) 0:00:06.760 ******
2025-09-19 07:05:19.642498 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:05:19.642513 | orchestrator |
2025-09-19 07:05:19.642526 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-09-19 07:05:19.642551 | orchestrator | Friday 19 September 2025 07:03:06 +0000 (0:00:00.335) 0:00:07.096 ******
2025-09-19 07:05:19.642565 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:05:19.642577 | orchestrator |
2025-09-19 07:05:19.642589 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-09-19 07:05:19.642602 | orchestrator | Friday 19 September 2025 07:03:06 +0000 (0:00:00.423) 0:00:07.423 ******
2025-09-19 07:05:19.642616 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:05:19.642628 | orchestrator |
2025-09-19 07:05:19.642641 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-09-19 07:05:19.642653 | orchestrator | Friday 19 September 2025 07:03:06 +0000 (0:00:00.534) 0:00:07.847 ******
2025-09-19 07:05:19.642665 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:05:19.642678 | orchestrator |
2025-09-19 07:05:19.642692 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-09-19 07:05:19.642714 | orchestrator | Friday 19 September 2025 07:03:07 +0000 (0:00:01.081) 0:00:08.382 ******
2025-09-19 07:05:19.642727 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:05:19.642740 | orchestrator |
2025-09-19 07:05:19.642752 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-09-19 07:05:19.642763 | orchestrator | Friday 19 September 2025 07:03:08 +0000 (0:00:00.835) 0:00:09.464 ******
2025-09-19 07:05:19.642774 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:05:19.642785 | orchestrator |
2025-09-19 07:05:19.642796
| orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-09-19 07:05:19.642806 | orchestrator | Friday 19 September 2025 07:03:09 +0000 (0:00:00.835) 0:00:10.300 ****** 2025-09-19 07:05:19.642817 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:05:19.642828 | orchestrator | 2025-09-19 07:05:19.642839 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-09-19 07:05:19.642850 | orchestrator | Friday 19 September 2025 07:03:09 +0000 (0:00:00.575) 0:00:10.876 ****** 2025-09-19 07:05:19.642861 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:05:19.642872 | orchestrator | 2025-09-19 07:05:19.642901 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-09-19 07:05:19.642919 | orchestrator | Friday 19 September 2025 07:03:10 +0000 (0:00:00.641) 0:00:11.517 ****** 2025-09-19 07:05:19.642944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 07:05:19.642971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 07:05:19.642992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 07:05:19.643036 | orchestrator | 2025-09-19 07:05:19.643057 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-09-19 07:05:19.643077 | orchestrator | Friday 19 September 2025 07:03:11 +0000 (0:00:01.487) 0:00:13.005 ****** 2025-09-19 07:05:19.643106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 07:05:19.643120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 07:05:19.643133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 07:05:19.643152 | orchestrator | 2025-09-19 07:05:19.643174 | orchestrator | TASK [rabbitmq : Copying over 
rabbitmq-env.conf] *******************************
2025-09-19 07:05:19.643186 | orchestrator | Friday 19 September 2025 07:03:14 +0000 (0:00:02.871) 0:00:15.877 ******
2025-09-19 07:05:19.643198 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-09-19 07:05:19.643209 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-09-19 07:05:19.643220 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-09-19 07:05:19.643231 | orchestrator |
2025-09-19 07:05:19.643242 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2025-09-19 07:05:19.643253 | orchestrator | Friday 19 September 2025 07:03:17 +0000 (0:00:03.152) 0:00:19.029 ******
2025-09-19 07:05:19.643264 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-09-19 07:05:19.643297 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-09-19 07:05:19.643309 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-09-19 07:05:19.643320 | orchestrator |
2025-09-19 07:05:19.643331 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-09-19 07:05:19.643342 | orchestrator | Friday 19 September 2025 07:03:19 +0000 (0:00:01.965) 0:00:20.994 ******
2025-09-19 07:05:19.643353 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-09-19 07:05:19.643364 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-09-19 07:05:19.643375 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-09-19 07:05:19.643386 | orchestrator |
2025-09-19 07:05:19.643409 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-09-19 07:05:19.643421 | orchestrator | Friday 19 September 2025 07:03:21 +0000 (0:00:01.826) 0:00:22.820 ******
2025-09-19 07:05:19.643432 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-09-19 07:05:19.643443 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-09-19 07:05:19.643454 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-09-19 07:05:19.643465 | orchestrator |
2025-09-19 07:05:19.643477 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-09-19 07:05:19.643488 | orchestrator | Friday 19 September 2025 07:03:24 +0000 (0:00:02.684) 0:00:25.505 ******
2025-09-19 07:05:19.643499 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-09-19 07:05:19.643510 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-09-19 07:05:19.643521 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-09-19 07:05:19.643532 | orchestrator |
2025-09-19 07:05:19.643543 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-09-19 07:05:19.643554 | orchestrator | Friday 19 September 2025 07:03:26 +0000 (0:00:02.101) 0:00:27.607 ******
2025-09-19 07:05:19.643564 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-09-19 07:05:19.643576 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-09-19 07:05:19.643587 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-09-19
07:05:19.643598 | orchestrator | 2025-09-19 07:05:19.643609 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-19 07:05:19.643626 | orchestrator | Friday 19 September 2025 07:03:28 +0000 (0:00:01.573) 0:00:29.180 ****** 2025-09-19 07:05:19.643637 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:05:19.643649 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:05:19.643660 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:05:19.643671 | orchestrator | 2025-09-19 07:05:19.643682 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-09-19 07:05:19.643693 | orchestrator | Friday 19 September 2025 07:03:28 +0000 (0:00:00.445) 0:00:29.625 ****** 2025-09-19 07:05:19.643705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 07:05:19.643718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 07:05:19.643743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': 
{'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 07:05:19.643757 | orchestrator |
2025-09-19 07:05:19.643768 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2025-09-19 07:05:19.643779 | orchestrator | Friday 19 September 2025 07:03:30 +0000 (0:00:01.741) 0:00:31.367 ******
2025-09-19 07:05:19.643797 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:05:19.643808 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:05:19.643819 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:05:19.643830 | orchestrator |
2025-09-19 07:05:19.643842 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2025-09-19 07:05:19.643853 | orchestrator | Friday 19 September 2025 07:03:31 +0000 (0:00:00.971) 0:00:32.338 ******
2025-09-19 07:05:19.643864 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:05:19.643875 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:05:19.643886 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:05:19.643897 | orchestrator |
2025-09-19 07:05:19.643908 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-09-19 07:05:19.643919 | orchestrator | Friday 19 September 2025 07:03:39 +0000 (0:00:08.500) 0:00:40.839 ******
2025-09-19 07:05:19.643930 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:05:19.643941 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:05:19.643952 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:05:19.643963 | orchestrator |
2025-09-19 07:05:19.643996 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-09-19 07:05:19.644007 | orchestrator |
2025-09-19 07:05:19.644019 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-09-19 07:05:19.644030 | orchestrator | Friday 19 September 2025 07:03:40 +0000 (0:00:00.368) 0:00:41.208 ******
2025-09-19 07:05:19.644041 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:05:19.644058 | orchestrator |
2025-09-19 07:05:19.644077 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-09-19 07:05:19.644096 | orchestrator | Friday 19 September 2025 07:03:40 +0000 (0:00:00.607) 0:00:41.815 ******
2025-09-19 07:05:19.644115 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:05:19.644134 | orchestrator |
2025-09-19 07:05:19.644152 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-09-19 07:05:19.644170 | orchestrator | Friday 19 September 2025 07:03:41 +0000 (0:00:00.295) 0:00:42.111 ******
2025-09-19 07:05:19.644187 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:05:19.644204 | orchestrator |
2025-09-19 07:05:19.644221 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-09-19 07:05:19.644239 | orchestrator | Friday 19 September 2025 07:03:47 +0000 (0:00:06.526) 0:00:48.638 ******
2025-09-19 07:05:19.644255 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:05:19.644306 | orchestrator |
2025-09-19 07:05:19.644329 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-09-19 07:05:19.644350 | orchestrator |
2025-09-19 07:05:19.644369 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-09-19 07:05:19.644388 | orchestrator | Friday 19 September 2025 07:04:38 +0000 (0:00:51.159) 0:01:39.797 ******
2025-09-19 07:05:19.644406 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:05:19.644424 | orchestrator |
2025-09-19 07:05:19.644444 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-09-19 07:05:19.644463 | orchestrator | Friday 19 September 2025 07:04:39 +0000 (0:00:00.605) 0:01:40.402 ******
2025-09-19 07:05:19.644486 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:05:19.644507 | orchestrator |
2025-09-19 07:05:19.644527 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-09-19 07:05:19.644545 | orchestrator | Friday 19 September 2025 07:04:39 +0000 (0:00:00.529) 0:01:40.932 ******
2025-09-19 07:05:19.644563 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:05:19.644583 | orchestrator |
2025-09-19 07:05:19.644604 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-09-19 07:05:19.644622 | orchestrator | Friday 19 September 2025 07:04:41 +0000 (0:00:01.780) 0:01:42.712 ******
2025-09-19 07:05:19.644640 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:05:19.644660 | orchestrator |
2025-09-19 07:05:19.644679 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-09-19 07:05:19.644698 | orchestrator |
2025-09-19 07:05:19.644710 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-09-19 07:05:19.644733 | orchestrator | Friday 19 September 2025 07:04:57 +0000 (0:00:15.763) 0:01:58.476 ******
2025-09-19 07:05:19.644744 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:05:19.644756 | orchestrator |
2025-09-19 07:05:19.644767 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-09-19 07:05:19.644778 | orchestrator | Friday 19 September 2025 07:04:58 +0000 (0:00:00.582) 0:01:59.058 ******
2025-09-19 07:05:19.644789 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:05:19.644800 | orchestrator |
2025-09-19 07:05:19.644811 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-09-19 07:05:19.644840 | orchestrator | Friday 19 September 2025 07:04:58 +0000 (0:00:00.207) 0:01:59.265 ******
2025-09-19 07:05:19.644853 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:05:19.644864 | orchestrator |
2025-09-19 07:05:19.644875 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-09-19 07:05:19.644886 | orchestrator | Friday 19 September 2025 07:04:59 +0000 (0:00:01.659) 0:02:00.926 ******
2025-09-19 07:05:19.644897 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:05:19.644908 | orchestrator |
2025-09-19 07:05:19.644920 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-09-19 07:05:19.644931 | orchestrator |
2025-09-19 07:05:19.644942 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-09-19 07:05:19.644953 | orchestrator | Friday 19 September 2025 07:05:15 +0000 (0:00:15.247) 0:02:16.174 ******
2025-09-19 07:05:19.644964 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:05:19.644975 | orchestrator |
2025-09-19 07:05:19.644987 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-09-19 07:05:19.644998 | orchestrator | Friday 19 September 2025 07:05:15 +0000 (0:00:00.731) 0:02:16.906 ******
2025-09-19 07:05:19.645009 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-09-19 07:05:19.645020 | orchestrator | enable_outward_rabbitmq_True
2025-09-19 07:05:19.645031 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-09-19 07:05:19.645042 | orchestrator | outward_rabbitmq_restart
2025-09-19 07:05:19.645053 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:05:19.645064 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:05:19.645075 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:05:19.645086 | orchestrator |
2025-09-19 07:05:19.645097 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-09-19 07:05:19.645108 | orchestrator | skipping: no
hosts matched
2025-09-19 07:05:19.645119 | orchestrator |
2025-09-19 07:05:19.645130 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-09-19 07:05:19.645141 | orchestrator | skipping: no hosts matched
2025-09-19 07:05:19.645153 | orchestrator |
2025-09-19 07:05:19.645164 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-09-19 07:05:19.645175 | orchestrator | skipping: no hosts matched
2025-09-19 07:05:19.645186 | orchestrator |
2025-09-19 07:05:19.645197 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:05:19.645208 | orchestrator | localhost      : ok=3   changed=0   unreachable=0   failed=0   skipped=1   rescued=0   ignored=1
2025-09-19 07:05:19.645220 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0   failed=0   skipped=8   rescued=0   ignored=0
2025-09-19 07:05:19.645231 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0   failed=0   skipped=2   rescued=0   ignored=0
2025-09-19 07:05:19.645242 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0   failed=0   skipped=2   rescued=0   ignored=0
2025-09-19 07:05:19.645254 | orchestrator |
2025-09-19 07:05:19.645265 | orchestrator |
2025-09-19 07:05:19.645334 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:05:19.645353 | orchestrator | Friday 19 September 2025 07:05:18 +0000 (0:00:02.363) 0:02:19.269 ******
2025-09-19 07:05:19.645365 | orchestrator | ===============================================================================
2025-09-19 07:05:19.645376 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 82.17s
2025-09-19 07:05:19.645387 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 9.97s
2025-09-19 07:05:19.645398 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.50s
2025-09-19 07:05:19.645409 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.85s
2025-09-19 07:05:19.645421 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 3.15s
2025-09-19 07:05:19.645432 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.87s
2025-09-19 07:05:19.645443 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.68s
2025-09-19 07:05:19.645454 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.36s
2025-09-19 07:05:19.645465 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.10s
2025-09-19 07:05:19.645476 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.97s
2025-09-19 07:05:19.645487 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.83s
2025-09-19 07:05:19.645498 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.79s
2025-09-19 07:05:19.645509 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.74s
2025-09-19 07:05:19.645520 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.57s
2025-09-19 07:05:19.645531 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.49s
2025-09-19 07:05:19.645542 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.08s
2025-09-19 07:05:19.645553 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.03s
2025-09-19 07:05:19.645564 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.97s
2025-09-19 07:05:19.645575 | orchestrator | rabbitmq : Get container facts
------------------------------------------ 0.93s
2025-09-19 07:05:19.645587 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.84s
2025-09-19 07:05:19.645707 | orchestrator | 2025-09-19 07:05:19 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:05:19.645722 | orchestrator | 2025-09-19 07:05:19 | INFO  | Task 5289a6c0-f997-49a7-9110-fa59f90f19e8 is in state STARTED
2025-09-19 07:05:19.645732 | orchestrator | 2025-09-19 07:05:19 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:05:22.668955 | orchestrator | 2025-09-19 07:05:22 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:05:22.669568 | orchestrator | 2025-09-19 07:05:22 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:05:22.670157 | orchestrator | 2025-09-19 07:05:22 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:05:22.673591 | orchestrator | 2025-09-19 07:05:22 | INFO  | Task 5289a6c0-f997-49a7-9110-fa59f90f19e8 is in state STARTED
2025-09-19 07:05:22.673604 | orchestrator | 2025-09-19 07:05:22 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:06:26.622877 | orchestrator | 2025-09-19 07:06:26 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:06:26.623879 | orchestrator | 2025-09-19 07:06:26 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:06:26.624047 | orchestrator | 2025-09-19 07:06:26 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:06:26.627197 | orchestrator | 2025-09-19 07:06:26 | INFO  | Task 5289a6c0-f997-49a7-9110-fa59f90f19e8 is in state SUCCESS
2025-09-19 07:06:26.627310 | orchestrator | 2025-09-19 07:06:26 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:06:26.628462 | orchestrator |
2025-09-19 07:06:26.628501 | orchestrator |
2025-09-19 07:06:26.628515 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 07:06:26.628527 | orchestrator |
2025-09-19 07:06:26.628609 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 07:06:26.628623 | orchestrator | Friday 19 September 2025 07:03:50 +0000 (0:00:00.291) 0:00:00.291 ******
2025-09-19 07:06:26.628635 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:06:26.628648 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:06:26.628659 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:06:26.628671 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:26.628682 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:06:26.628693 | orchestrator | ok: [testbed-node-2] 2025-09-19
07:06:26.628704 | orchestrator | 2025-09-19 07:06:26.628716 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:06:26.628801 | orchestrator | Friday 19 September 2025 07:03:50 +0000 (0:00:00.833) 0:00:01.124 ****** 2025-09-19 07:06:26.628814 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-09-19 07:06:26.628826 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-09-19 07:06:26.628837 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-09-19 07:06:26.628848 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-09-19 07:06:26.628860 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-09-19 07:06:26.628871 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-09-19 07:06:26.628883 | orchestrator | 2025-09-19 07:06:26.628894 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-09-19 07:06:26.628905 | orchestrator | 2025-09-19 07:06:26.628942 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-09-19 07:06:26.628969 | orchestrator | Friday 19 September 2025 07:03:52 +0000 (0:00:01.580) 0:00:02.704 ****** 2025-09-19 07:06:26.628983 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:06:26.628995 | orchestrator | 2025-09-19 07:06:26.629007 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-09-19 07:06:26.629018 | orchestrator | Friday 19 September 2025 07:03:54 +0000 (0:00:01.617) 0:00:04.321 ****** 2025-09-19 07:06:26.629032 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629046 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629058 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629120 | orchestrator |
2025-09-19 07:06:26.629132 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-09-19 07:06:26.629144 | orchestrator | Friday 19 September 2025 07:03:55 +0000 (0:00:01.671) 0:00:05.993 ******
2025-09-19 07:06:26.629156 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629182 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629194 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629263 | orchestrator |
2025-09-19 07:06:26.629274 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-09-19 07:06:26.629286 | orchestrator | Friday 19 September 2025 07:03:57 +0000 (0:00:01.676) 0:00:07.669 ******
2025-09-19 07:06:26.629298 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629309 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629336 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629395 | orchestrator |
2025-09-19 07:06:26.629407 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-09-19 07:06:26.629419 | orchestrator | Friday 19 September 2025 07:03:58 +0000 (0:00:01.179) 0:00:08.849 ******
2025-09-19 07:06:26.629430 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629442 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629453 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629514 | orchestrator |
2025-09-19 07:06:26.629525 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-09-19 07:06:26.629536 | orchestrator | Friday 19 September 2025 07:04:00 +0000 (0:00:01.935) 0:00:10.784 ******
2025-09-19 07:06:26.629548 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629564 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629576 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.629622 | orchestrator |
2025-09-19 07:06:26.629634 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-09-19 07:06:26.629645 | orchestrator | Friday 19 September 2025 07:04:02 +0000 (0:00:02.098) 0:00:12.883 ******
2025-09-19 07:06:26.629657 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:06:26.629669 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:06:26.629680 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:06:26.629691 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:06:26.629709 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:06:26.629720 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:06:26.629731 | orchestrator |
2025-09-19 07:06:26.629742 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-09-19 07:06:26.629754 | orchestrator | Friday 19 September 2025 07:04:05 +0000 (0:00:02.753) 0:00:15.636 ******
2025-09-19 07:06:26.629765 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-09-19 07:06:26.629777 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-09-19 07:06:26.629788 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-09-19 07:06:26.629804 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-09-19 07:06:26.629816 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-09-19 07:06:26.629827 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-09-19 07:06:26.629839 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-19 07:06:26.629850 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-19 07:06:26.629862 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-19 07:06:26.629873 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-19 07:06:26.629884 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-19 07:06:26.629895 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-19 07:06:26.629907 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-19 07:06:26.629919 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-19 07:06:26.629935 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-19 07:06:26.629947 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-19 07:06:26.629959 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-19 07:06:26.629970 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-19 07:06:26.629981 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-19 07:06:26.629993 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-19 07:06:26.630004 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-19 07:06:26.630060 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-19 07:06:26.630076 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-19 07:06:26.630087 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-19 07:06:26.630098 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-19 07:06:26.630110 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-19 07:06:26.630121 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-19 07:06:26.630139 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-19 07:06:26.630150 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-19 07:06:26.630162 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-19 07:06:26.630173 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-19 07:06:26.630185 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-19 07:06:26.630196 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-19 07:06:26.630207 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-19 07:06:26.630219 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-19 07:06:26.630259 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-19 07:06:26.630270 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-19 07:06:26.630282 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-19 07:06:26.630293 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-19 07:06:26.630305 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-19 07:06:26.630324 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-19 07:06:26.630335 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-19 07:06:26.630347 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-09-19 07:06:26.630359 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-09-19 07:06:26.630370 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-09-19 07:06:26.630381 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-09-19 07:06:26.630393 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-09-19 07:06:26.630404 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-09-19 07:06:26.630415 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-19 07:06:26.630432 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-19 07:06:26.630444 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-19 07:06:26.630455 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-19 07:06:26.630467 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-19 07:06:26.630478 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-19 07:06:26.630490 | orchestrator |
2025-09-19 07:06:26.630501 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-19 07:06:26.630520 | orchestrator | Friday 19 September 2025 07:04:25 +0000 (0:00:19.887) 0:00:35.524 ******
2025-09-19 07:06:26.630531 | orchestrator |
2025-09-19 07:06:26.630543 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-19 07:06:26.630554 | orchestrator | Friday 19 September 2025 07:04:25 +0000 (0:00:00.069) 0:00:35.593 ******
2025-09-19 07:06:26.630565 | orchestrator |
2025-09-19 07:06:26.630576 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-19 07:06:26.630588 | orchestrator | Friday 19 September 2025 07:04:25 +0000 (0:00:00.074) 0:00:35.667 ******
2025-09-19 07:06:26.630599 | orchestrator |
2025-09-19 07:06:26.630610 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-19 07:06:26.630622 | orchestrator | Friday 19 September 2025 07:04:25 +0000 (0:00:00.067) 0:00:35.735 ******
2025-09-19 07:06:26.630633 | orchestrator |
2025-09-19 07:06:26.630644 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-19 07:06:26.630656 | orchestrator | Friday 19 September 2025 07:04:25 +0000 (0:00:00.066) 0:00:35.801 ******
2025-09-19 07:06:26.630667 | orchestrator |
2025-09-19 07:06:26.630678 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-19 07:06:26.630689 | orchestrator | Friday 19 September 2025 07:04:25 +0000 (0:00:00.074) 0:00:35.876 ******
2025-09-19 07:06:26.630701 | orchestrator |
2025-09-19 07:06:26.630712 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-09-19 07:06:26.630723 | orchestrator | Friday 19 September 2025 07:04:25 +0000 (0:00:00.067) 0:00:35.944 ******
2025-09-19 07:06:26.630735 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:06:26.630746 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:06:26.630757 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:06:26.630769 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:26.630780 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:06:26.630791 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:06:26.630802 | orchestrator |
2025-09-19 07:06:26.630813 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-09-19 07:06:26.630825 | orchestrator | Friday 19 September 2025 07:04:27 +0000 (0:00:02.089) 0:00:38.033 ******
2025-09-19 07:06:26.630836 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:06:26.630847 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:06:26.630858 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:06:26.630870 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:06:26.630881 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:06:26.630892 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:06:26.630903 | orchestrator |
2025-09-19 07:06:26.630914 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-09-19 07:06:26.630926 | orchestrator |
2025-09-19 07:06:26.630937 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-19 07:06:26.630949 | orchestrator | Friday 19 September 2025 07:05:00 +0000 (0:00:32.579) 0:01:10.613 ******
2025-09-19 07:06:26.630960 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:06:26.630971 | orchestrator |
2025-09-19 07:06:26.630982 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-19 07:06:26.630994 | orchestrator | Friday 19 September 2025 07:05:01 +0000 (0:00:01.242) 0:01:11.855 ******
2025-09-19 07:06:26.631005 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:06:26.631016 | orchestrator |
2025-09-19 07:06:26.631033 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-09-19 07:06:26.631045 | orchestrator | Friday 19 September 2025 07:05:02 +0000 (0:00:00.539) 0:01:12.395 ******
2025-09-19 07:06:26.631056 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:06:26.631068 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:26.631079 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:06:26.631090 | orchestrator |
2025-09-19 07:06:26.631102 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-09-19 07:06:26.631120 | orchestrator | Friday 19 September 2025 07:05:03 +0000 (0:00:01.054) 0:01:13.449 ******
2025-09-19 07:06:26.631131 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:26.631142 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:06:26.631154 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:06:26.631165 | orchestrator |
2025-09-19 07:06:26.631176 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-09-19 07:06:26.631187 | orchestrator | Friday 19 September 2025 07:05:03 +0000 (0:00:00.397) 0:01:13.847 ******
2025-09-19 07:06:26.631199 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:26.631210 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:06:26.631269 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:06:26.631283 | orchestrator |
2025-09-19 07:06:26.631295 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-09-19 07:06:26.631306 | orchestrator | Friday 19 September 2025 07:05:03 +0000 (0:00:00.361) 0:01:14.208 ******
2025-09-19 07:06:26.631317 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:26.631328 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:06:26.631339 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:06:26.631351 | orchestrator |
2025-09-19 07:06:26.631362 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-09-19 07:06:26.631384 | orchestrator | Friday 19 September 2025 07:05:04 +0000 (0:00:00.343) 0:01:14.551 ******
2025-09-19 07:06:26.631396 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:26.631407 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:06:26.631418 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:06:26.631429 | orchestrator |
2025-09-19 07:06:26.631441 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-09-19 07:06:26.631452 | orchestrator | Friday 19 September 2025 07:05:04 +0000 (0:00:00.565) 0:01:15.117 ******
2025-09-19 07:06:26.631463 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:26.631474 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:26.631486 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:26.631497 | orchestrator |
2025-09-19 07:06:26.631508 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-09-19 07:06:26.631519 | orchestrator | Friday 19 September 2025 07:05:05 +0000 (0:00:00.325) 0:01:15.443 ******
2025-09-19 07:06:26.631530 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:26.631540 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:26.631550 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:26.631560 | orchestrator |
2025-09-19 07:06:26.631570 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-09-19 07:06:26.631580 | orchestrator | Friday 19 September 2025 07:05:05 +0000 (0:00:00.389) 0:01:15.832 ******
2025-09-19 07:06:26.631590 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:26.631600 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:26.631610 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:26.631620 | orchestrator |
2025-09-19 07:06:26.631630 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-09-19 07:06:26.631641 | orchestrator | Friday 19 September 2025 07:05:05 +0000 (0:00:00.295) 0:01:16.128 ******
2025-09-19 07:06:26.631651 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:26.631661 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:26.631671 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:26.631680 | orchestrator |
2025-09-19 07:06:26.631691 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-09-19 07:06:26.631700 | orchestrator | Friday 19 September 2025 07:05:06 +0000 (0:00:00.571) 0:01:16.699 ******
2025-09-19 07:06:26.631710 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:26.631720 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:26.631730 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:26.631740 | orchestrator |
2025-09-19 07:06:26.631750 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-09-19 07:06:26.631760 | orchestrator | Friday 19 September 2025 07:05:06 +0000 (0:00:00.326) 0:01:17.026 ******
2025-09-19 07:06:26.631776 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:26.631786 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:26.631796 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:26.631806 | orchestrator |
2025-09-19 07:06:26.631816 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-09-19 07:06:26.631826 | orchestrator | Friday 19 September 2025 07:05:07 +0000 (0:00:00.307) 0:01:17.333 ******
2025-09-19 07:06:26.631836 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:26.631846 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:26.631856 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:26.631866 | orchestrator |
2025-09-19 07:06:26.631876 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-09-19 07:06:26.631886 | orchestrator | Friday 19 September 2025 07:05:07 +0000 (0:00:00.328) 0:01:17.661 ******
2025-09-19 07:06:26.631896 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:26.631906 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:26.631916 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:26.631926 | orchestrator |
2025-09-19 07:06:26.631936 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-09-19 07:06:26.631946 | orchestrator | Friday 19 September 2025 07:05:07 +0000 (0:00:00.529) 0:01:18.191 ******
2025-09-19 07:06:26.631956 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:26.631966 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:26.631976 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:26.631986 | orchestrator |
2025-09-19 07:06:26.631996 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-09-19 07:06:26.632006 | orchestrator | Friday 19 September 2025 07:05:08 +0000 (0:00:00.354) 0:01:18.545 ******
2025-09-19 07:06:26.632016 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:26.632026 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:26.632036 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:26.632045 | orchestrator |
2025-09-19 07:06:26.632061 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-09-19 07:06:26.632071 | orchestrator | Friday 19 September 2025 07:05:08 +0000 (0:00:00.337) 0:01:18.883 ******
2025-09-19 07:06:26.632081 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:26.632091 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:26.632101 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:26.632111 | orchestrator |
2025-09-19 07:06:26.632121 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-09-19 07:06:26.632131 | orchestrator | Friday 19 September 2025 07:05:08 +0000 (0:00:00.293) 0:01:19.177 ******
2025-09-19 07:06:26.632141 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:26.632151 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:26.632161 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:26.632170 | orchestrator |
2025-09-19 07:06:26.632180 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-19 07:06:26.632191 | orchestrator | Friday 19 September 2025 07:05:09 +0000 (0:00:00.539) 0:01:19.716 ******
2025-09-19 07:06:26.632201 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:06:26.632211 | orchestrator |
2025-09-19 07:06:26.632234 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2025-09-19 07:06:26.632245 | orchestrator | Friday 19 September 2025 07:05:10 +0000 (0:00:00.626) 0:01:20.343 ******
2025-09-19 07:06:26.632255 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:26.632265 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:06:26.632276 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:06:26.632285 | orchestrator |
2025-09-19 07:06:26.632300 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2025-09-19 07:06:26.632311 | orchestrator | Friday 19 September 2025 07:05:10 +0000 (0:00:00.661) 0:01:21.004 ******
2025-09-19 07:06:26.632321 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:26.632337 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:06:26.632348 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:06:26.632358 | orchestrator |
2025-09-19 07:06:26.632368 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2025-09-19 07:06:26.632378 | orchestrator | Friday 19 September 2025 07:05:11 +0000 (0:00:00.729) 0:01:21.734 ******
2025-09-19 07:06:26.632388 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:26.632398 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:26.632408 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:26.632418 | orchestrator |
2025-09-19 07:06:26.632428 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2025-09-19 07:06:26.632438 | orchestrator | Friday 19 September 2025 07:05:11 +0000 (0:00:00.405) 0:01:22.139 ******
2025-09-19 07:06:26.632449 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:26.632458 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:26.632469 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:26.632479 | orchestrator |
2025-09-19 07:06:26.632489 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2025-09-19 07:06:26.632499 | orchestrator | Friday 19 September 2025 07:05:12 +0000 (0:00:00.351) 0:01:22.491 ******
2025-09-19 07:06:26.632509 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:26.632519 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:26.632529 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:26.632539 | orchestrator |
2025-09-19 07:06:26.632549 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2025-09-19 07:06:26.632559 | orchestrator | Friday 19 September 2025 07:05:12 +0000 (0:00:00.336) 0:01:22.827 ******
2025-09-19 07:06:26.632569 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:26.632579 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:26.632589 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:26.632599 | orchestrator |
2025-09-19 07:06:26.632609 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2025-09-19 07:06:26.632619 | orchestrator | Friday 19 September 2025 07:05:13 +0000 (0:00:00.576) 0:01:23.404 ******
2025-09-19 07:06:26.632629 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:26.632639 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:26.632649 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:26.632659 | orchestrator |
2025-09-19 07:06:26.632669 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-09-19 07:06:26.632679 | orchestrator | Friday 19 September 2025 07:05:13 +0000 (0:00:00.438) 0:01:23.842 ******
2025-09-19 07:06:26.632689 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:26.632699 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:26.632709 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:26.632719 | orchestrator |
2025-09-19 07:06:26.632729 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-09-19 07:06:26.632739 | orchestrator | Friday 19 September 2025 07:05:13 +0000 (0:00:00.329) 0:01:24.171 ******
2025-09-19 07:06:26.632749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.632762
| orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.632779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.632797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.632809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.632824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.632835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.632845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.632856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.632866 | orchestrator | 2025-09-19 07:06:26.632876 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-19 07:06:26.632887 | orchestrator | Friday 19 September 2025 07:05:15 +0000 (0:00:01.501) 0:01:25.673 ****** 2025-09-19 07:06:26.632897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 
'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.632908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.632918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.632940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.632951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-09-19 07:06:26.632966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.632976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.632986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.632997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.633007 | orchestrator | 2025-09-19 07:06:26.633017 | orchestrator | TASK [ovn-db : Check ovn containers] 
******************************************* 2025-09-19 07:06:26.633027 | orchestrator | Friday 19 September 2025 07:05:19 +0000 (0:00:04.159) 0:01:29.833 ****** 2025-09-19 07:06:26.633037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.633048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.633064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.633080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.633091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 
'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.633101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.633112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.633188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.633208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.633218 | orchestrator | 2025-09-19 07:06:26.633292 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-19 07:06:26.633302 | orchestrator | Friday 19 September 2025 07:05:21 +0000 (0:00:02.317) 0:01:32.150 ****** 2025-09-19 07:06:26.633313 | orchestrator | 2025-09-19 07:06:26.633323 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-19 07:06:26.633333 | orchestrator | Friday 19 September 2025 07:05:21 +0000 (0:00:00.072) 0:01:32.223 ****** 2025-09-19 07:06:26.633343 | orchestrator | 2025-09-19 07:06:26.633353 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-19 07:06:26.633362 | orchestrator | Friday 19 September 2025 07:05:22 +0000 (0:00:00.079) 0:01:32.303 ****** 2025-09-19 07:06:26.633372 | orchestrator | 2025-09-19 07:06:26.633382 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-19 07:06:26.633392 | orchestrator | Friday 19 September 2025 07:05:22 +0000 (0:00:00.066) 0:01:32.370 ****** 2025-09-19 07:06:26.633412 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:06:26.633422 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:06:26.633432 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:06:26.633442 | orchestrator | 2025-09-19 07:06:26.633452 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-19 07:06:26.633462 | orchestrator | Friday 19 September 2025 07:05:29 +0000 (0:00:07.866) 0:01:40.236 ****** 2025-09-19 07:06:26.633472 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:06:26.633482 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:06:26.633492 | 
orchestrator | changed: [testbed-node-2] 2025-09-19 07:06:26.633501 | orchestrator | 2025-09-19 07:06:26.633511 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-19 07:06:26.633521 | orchestrator | Friday 19 September 2025 07:05:37 +0000 (0:00:07.699) 0:01:47.936 ****** 2025-09-19 07:06:26.633531 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:06:26.633541 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:06:26.633552 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:06:26.633562 | orchestrator | 2025-09-19 07:06:26.633571 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-19 07:06:26.633581 | orchestrator | Friday 19 September 2025 07:05:45 +0000 (0:00:07.437) 0:01:55.373 ****** 2025-09-19 07:06:26.633591 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:06:26.633601 | orchestrator | 2025-09-19 07:06:26.633611 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-19 07:06:26.633621 | orchestrator | Friday 19 September 2025 07:05:45 +0000 (0:00:00.122) 0:01:55.496 ****** 2025-09-19 07:06:26.633631 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:06:26.633641 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:06:26.633651 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:06:26.633661 | orchestrator | 2025-09-19 07:06:26.633679 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-19 07:06:26.633689 | orchestrator | Friday 19 September 2025 07:05:46 +0000 (0:00:00.861) 0:01:56.358 ****** 2025-09-19 07:06:26.633699 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:06:26.633709 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:06:26.633719 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:06:26.633729 | orchestrator | 2025-09-19 07:06:26.633739 | orchestrator | TASK [ovn-db : Get OVN_Southbound 
cluster leader] ****************************** 2025-09-19 07:06:26.633749 | orchestrator | Friday 19 September 2025 07:05:46 +0000 (0:00:00.611) 0:01:56.970 ****** 2025-09-19 07:06:26.633759 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:06:26.633769 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:06:26.633779 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:06:26.633789 | orchestrator | 2025-09-19 07:06:26.633799 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-19 07:06:26.633809 | orchestrator | Friday 19 September 2025 07:05:47 +0000 (0:00:00.993) 0:01:57.963 ****** 2025-09-19 07:06:26.633819 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:06:26.633829 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:06:26.633839 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:06:26.633849 | orchestrator | 2025-09-19 07:06:26.633859 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-19 07:06:26.633869 | orchestrator | Friday 19 September 2025 07:05:48 +0000 (0:00:00.614) 0:01:58.578 ****** 2025-09-19 07:06:26.633879 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:06:26.633889 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:06:26.633899 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:06:26.633909 | orchestrator | 2025-09-19 07:06:26.633919 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-19 07:06:26.633934 | orchestrator | Friday 19 September 2025 07:05:49 +0000 (0:00:00.713) 0:01:59.291 ****** 2025-09-19 07:06:26.633944 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:06:26.633954 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:06:26.633964 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:06:26.633980 | orchestrator | 2025-09-19 07:06:26.633990 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-09-19 
07:06:26.634000 | orchestrator | Friday 19 September 2025 07:05:49 +0000 (0:00:00.722) 0:02:00.014 ****** 2025-09-19 07:06:26.634010 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:06:26.634066 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:06:26.634076 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:06:26.634086 | orchestrator | 2025-09-19 07:06:26.634096 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-19 07:06:26.634106 | orchestrator | Friday 19 September 2025 07:05:50 +0000 (0:00:00.491) 0:02:00.505 ****** 2025-09-19 07:06:26.634117 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.634127 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.634138 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.634149 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.634159 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.634169 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.634186 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.634197 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.634218 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.634275 | orchestrator | 2025-09-19 07:06:26.634286 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-19 07:06:26.634296 | orchestrator | Friday 19 September 2025 07:05:51 +0000 (0:00:01.357) 0:02:01.862 ****** 2025-09-19 07:06:26.634306 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.634317 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.634327 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.634338 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.634348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.634358 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.634376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:06:26.634387 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.634404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.634414 | orchestrator |
2025-09-19 07:06:26.634422 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-09-19 07:06:26.634430 | orchestrator | Friday 19 September 2025 07:05:56 +0000 (0:00:04.522) 0:02:06.385 ******
2025-09-19 07:06:26.634443 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.634452 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.634460 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.634469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.634477 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.634486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.634494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.634509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.634523 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:06:26.634532 | orchestrator |
2025-09-19 07:06:26.634540 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-19 07:06:26.634548 | orchestrator | Friday 19 September 2025 07:05:58 +0000 (0:00:02.493) 0:02:08.879 ******
2025-09-19 07:06:26.634556 | orchestrator |
2025-09-19 07:06:26.634565 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-19 07:06:26.634573 | orchestrator | Friday 19 September 2025 07:05:58 +0000 (0:00:00.061) 0:02:08.941 ******
2025-09-19 07:06:26.634581 | orchestrator |
2025-09-19 07:06:26.634589 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-19 07:06:26.634601 | orchestrator | Friday 19 September 2025 07:05:58 +0000 (0:00:00.193) 0:02:09.134 ******
2025-09-19 07:06:26.634609 | orchestrator |
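[editor's note: the "Check ovn containers" items above each loop over a kolla-style service spec (container_name, image, volumes, enabled). A minimal sketch, assuming nothing beyond the dict shape visible in the log (the trimmed dicts and helper names below are illustrative, not kolla-ansible code), of filtering such specs to enabled services and picking out their read-only binds:]

```python
# Trimmed copies of the {'key': ..., 'value': {...}} items shown in the log.
# The disabled ovn-nb-db entry is hypothetical, added only to exercise the filter.
services = {
    "ovn-northd": {
        "container_name": "ovn_northd",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711",
        "volumes": [
            "/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro",
            "kolla_logs:/var/log/kolla/",
        ],
    },
    "ovn-nb-db": {
        "container_name": "ovn_nb_db",
        "enabled": False,  # hypothetical, for illustration only
        "image": "registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711",
        "volumes": ["ovn_nb_db:/var/lib/openvswitch/ovn-nb/"],
    },
}


def enabled_containers(specs):
    """Container names of all specs with enabled=True, in definition order."""
    return [v["container_name"] for v in specs.values() if v["enabled"]]


def readonly_binds(spec):
    """Volume entries mounted read-only (kolla marks them with a ':ro' suffix)."""
    return [v for v in spec["volumes"] if v.endswith(":ro")]


print(enabled_containers(services))            # ['ovn_northd']
print(readonly_binds(services["ovn-northd"]))  # ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro']
```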
2025-09-19 07:06:26.634617 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-09-19 07:06:26.634625 | orchestrator | Friday 19 September 2025 07:05:58 +0000 (0:00:00.059) 0:02:09.193 ******
2025-09-19 07:06:26.634633 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:06:26.634641 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:06:26.634650 | orchestrator |
2025-09-19 07:06:26.634658 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-09-19 07:06:26.634666 | orchestrator | Friday 19 September 2025 07:06:05 +0000 (0:00:06.142) 0:02:15.335 ******
2025-09-19 07:06:26.634674 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:06:26.634682 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:06:26.634690 | orchestrator |
2025-09-19 07:06:26.634699 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-09-19 07:06:26.634707 | orchestrator | Friday 19 September 2025 07:06:11 +0000 (0:00:06.203) 0:02:21.539 ******
2025-09-19 07:06:26.634715 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:06:26.634723 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:06:26.634731 | orchestrator |
2025-09-19 07:06:26.634739 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-09-19 07:06:26.634747 | orchestrator | Friday 19 September 2025 07:06:17 +0000 (0:00:06.209) 0:02:27.748 ******
2025-09-19 07:06:26.634756 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:26.634764 | orchestrator |
2025-09-19 07:06:26.634772 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-09-19 07:06:26.634780 | orchestrator | Friday 19 September 2025 07:06:17 +0000 (0:00:00.200) 0:02:27.949 ******
2025-09-19 07:06:26.634789 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:26.634797 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:06:26.634805 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:06:26.634813 | orchestrator |
2025-09-19 07:06:26.634821 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-09-19 07:06:26.634829 | orchestrator | Friday 19 September 2025 07:06:18 +0000 (0:00:01.037) 0:02:28.986 ******
2025-09-19 07:06:26.634838 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:26.634846 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:26.634854 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:06:26.634862 | orchestrator |
2025-09-19 07:06:26.634870 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-09-19 07:06:26.634878 | orchestrator | Friday 19 September 2025 07:06:19 +0000 (0:00:00.663) 0:02:29.649 ******
2025-09-19 07:06:26.634886 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:26.634895 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:06:26.634909 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:06:26.634917 | orchestrator |
2025-09-19 07:06:26.634925 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-09-19 07:06:26.634933 | orchestrator | Friday 19 September 2025 07:06:20 +0000 (0:00:01.139) 0:02:30.789 ******
2025-09-19 07:06:26.634942 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:26.634950 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:26.634958 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:06:26.634966 | orchestrator |
2025-09-19 07:06:26.634974 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-09-19 07:06:26.634982 | orchestrator | Friday 19 September 2025 07:06:21 +0000 (0:00:00.650) 0:02:31.440 ******
2025-09-19 07:06:26.634991 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:26.634999 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:06:26.635007 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:06:26.635015 | orchestrator |
2025-09-19 07:06:26.635023 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-09-19 07:06:26.635031 | orchestrator | Friday 19 September 2025 07:06:22 +0000 (0:00:00.909) 0:02:32.350 ******
2025-09-19 07:06:26.635040 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:26.635048 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:06:26.635056 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:06:26.635064 | orchestrator |
2025-09-19 07:06:26.635072 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:06:26.635081 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-09-19 07:06:26.635090 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-09-19 07:06:26.635103 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-09-19 07:06:26.635112 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:06:26.635120 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:06:26.635128 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:06:26.635136 | orchestrator |
2025-09-19 07:06:26.635144 | orchestrator |
2025-09-19 07:06:26.635153 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:06:26.635161 | orchestrator | Friday 19 September 2025 07:06:23 +0000 (0:00:01.139) 0:02:33.490 ******
2025-09-19 07:06:26.635169 | orchestrator | ===============================================================================
2025-09-19 07:06:26.635177 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 32.58s
2025-09-19 07:06:26.635185 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.89s
2025-09-19 07:06:26.635194 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.01s
2025-09-19 07:06:26.635205 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.90s
2025-09-19 07:06:26.635214 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.65s
2025-09-19 07:06:26.635236 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.52s
2025-09-19 07:06:26.635245 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.16s
2025-09-19 07:06:26.635253 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.75s
2025-09-19 07:06:26.635261 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.49s
2025-09-19 07:06:26.635269 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.32s
2025-09-19 07:06:26.635283 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.10s
2025-09-19 07:06:26.635291 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.09s
2025-09-19 07:06:26.635299 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.94s
2025-09-19 07:06:26.635307 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.68s
2025-09-19 07:06:26.635315 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.67s
2025-09-19 07:06:26.635323 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.62s
2025-09-19 07:06:26.635332 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.58s
2025-09-19 07:06:26.635340 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.50s
2025-09-19 07:06:26.635348 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.36s
2025-09-19 07:06:26.635356 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.24s
2025-09-19 07:06:29.673906 | orchestrator | 2025-09-19 07:06:29 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:06:29.675979 | orchestrator | 2025-09-19 07:06:29 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:06:29.677797 | orchestrator | 2025-09-19 07:06:29 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:06:29.677993 | orchestrator | 2025-09-19 07:06:29 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:06:32.713642 | orchestrator | 2025-09-19 07:06:32 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:06:32.714703 | orchestrator | 2025-09-19 07:06:32 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:06:32.716859 | orchestrator | 2025-09-19 07:06:32 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:06:32.716989 | orchestrator | 2025-09-19 07:06:32 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:06:35.755758 | orchestrator | 2025-09-19 07:06:35 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:06:35.756388 | orchestrator | 2025-09-19 07:06:35 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:06:35.757624 | orchestrator | 2025-09-19 07:06:35 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:06:35.757867 | orchestrator | 2025-09-19 07:06:35 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:06:38.786567 | orchestrator | 2025-09-19 07:06:38 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:06:38.787751 | orchestrator | 2025-09-19 07:06:38 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:06:38.788124 | orchestrator | 2025-09-19 07:06:38 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:06:38.788363 | orchestrator | 2025-09-19 07:06:38 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:06:41.869918 | orchestrator | 2025-09-19 07:06:41 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:06:41.870413 | orchestrator | 2025-09-19 07:06:41 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:06:41.871783 | orchestrator | 2025-09-19 07:06:41 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:06:41.871800 | orchestrator | 2025-09-19 07:06:41 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:06:44.897475 | orchestrator | 2025-09-19 07:06:44 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:06:44.897825 | orchestrator | 2025-09-19 07:06:44 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:06:44.898203 | orchestrator | 2025-09-19 07:06:44 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:06:44.898254 | orchestrator | 2025-09-19 07:06:44 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:06:47.948411 | orchestrator | 2025-09-19 07:06:47 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:06:47.948524 | orchestrator | 2025-09-19 07:06:47 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:06:47.948541 | orchestrator | 2025-09-19 07:06:47 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:06:47.948553 | orchestrator | 2025-09-19 07:06:47 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:06:50.967108 | orchestrator | 2025-09-19 07:06:50 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:06:50.967343 | orchestrator | 2025-09-19 07:06:50 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:06:50.968335 | orchestrator | 2025-09-19 07:06:50 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state STARTED
2025-09-19 07:06:50.968738 | orchestrator | 2025-09-19 07:06:50 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:06:53.997029 | orchestrator | 2025-09-19 07:06:53 | INFO  | Task f21d7d6e-9934-44db-82b9-bae966a4256b is in state STARTED
2025-09-19 07:06:53.997118 | orchestrator | 2025-09-19 07:06:53 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:06:53.997133 | orchestrator | 2025-09-19 07:06:53 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:06:54.000153 | orchestrator | 2025-09-19 07:06:53 | INFO  | Task 9461d3d5-abcf-466c-ac53-bf11f6569ac1 is in state SUCCESS
2025-09-19 07:06:54.002396 | orchestrator |
2025-09-19 07:06:54.002432 | orchestrator |
2025-09-19 07:06:54.002445 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2025-09-19 07:06:54.002458 | orchestrator |
2025-09-19 07:06:54.002470 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2025-09-19 07:06:54.002482 | orchestrator | Friday 19 September 2025 07:00:10 +0000 (0:00:00.256) 0:00:00.256 ******
2025-09-19 07:06:54.002495 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:06:54.002508 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:06:54.002520 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:06:54.002532 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:54.002545 | orchestrator | ok: [testbed-node-1]
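[editor's note: the interleaved "Task … is in state STARTED / Wait 1 second(s) until the next check" lines above come from a poll loop on the orchestrator. A minimal sketch of that pattern (the function name and call shape are assumptions for illustration, not the actual osism task watcher): fetch each task's state, log it, and stop once no task is still STARTED:]

```python
import time


def wait_for_tasks(get_state, task_ids, poll_interval=1.0, log=print):
    """Poll task states until none is STARTED; return the final states.

    get_state: callable mapping a task id to a state string such as
    'STARTED' or 'SUCCESS'. Mirrors the log output above, but is only
    an illustrative sketch of the polling pattern.
    """
    while True:
        states = {tid: get_state(tid) for tid in task_ids}
        for tid, state in states.items():
            log(f"Task {tid} is in state {state}")
        if all(s != "STARTED" for s in states.values()):
            return states
        log(f"Wait {int(poll_interval)} second(s) until the next check")
        time.sleep(poll_interval)
```

A fake state source that flips to SUCCESS after a couple of polls is enough to exercise the loop without a real task queue.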
2025-09-19 07:06:54.002556 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:06:54.002568 | orchestrator |
2025-09-19 07:06:54.002580 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2025-09-19 07:06:54.002593 | orchestrator | Friday 19 September 2025 07:00:11 +0000 (0:00:00.674) 0:00:00.930 ******
2025-09-19 07:06:54.002605 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:06:54.002618 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:06:54.002630 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:06:54.002641 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.002664 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:54.002675 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:54.002687 | orchestrator |
2025-09-19 07:06:54.002698 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2025-09-19 07:06:54.002710 | orchestrator | Friday 19 September 2025 07:00:11 +0000 (0:00:00.545) 0:00:01.476 ******
2025-09-19 07:06:54.002721 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:06:54.002733 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:06:54.002744 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:06:54.002779 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.002791 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:54.002802 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:54.002814 | orchestrator |
2025-09-19 07:06:54.002825 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2025-09-19 07:06:54.002836 | orchestrator | Friday 19 September 2025 07:00:12 +0000 (0:00:00.568) 0:00:02.044 ******
2025-09-19 07:06:54.002848 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:06:54.002859 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:06:54.002870 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:06:54.002881 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:06:54.002892 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:06:54.002903 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:06:54.002914 | orchestrator |
2025-09-19 07:06:54.002925 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2025-09-19 07:06:54.002936 | orchestrator | Friday 19 September 2025 07:00:14 +0000 (0:00:02.219) 0:00:04.264 ******
2025-09-19 07:06:54.002947 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:06:54.002959 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:06:54.002970 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:06:54.002981 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:06:54.002992 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:06:54.003003 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:06:54.003014 | orchestrator |
2025-09-19 07:06:54.003025 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2025-09-19 07:06:54.003036 | orchestrator | Friday 19 September 2025 07:00:15 +0000 (0:00:01.168) 0:00:05.432 ******
2025-09-19 07:06:54.003048 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:06:54.003059 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:06:54.003070 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:06:54.003081 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:06:54.003092 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:06:54.003103 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:06:54.003114 | orchestrator |
2025-09-19 07:06:54.003125 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2025-09-19 07:06:54.003137 | orchestrator | Friday 19 September 2025 07:00:16 +0000 (0:00:01.104) 0:00:06.536 ******
2025-09-19 07:06:54.003148 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:06:54.003159 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:06:54.003170 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:06:54.003181 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.003192 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:54.003232 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:54.003244 | orchestrator |
2025-09-19 07:06:54.003255 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2025-09-19 07:06:54.003267 | orchestrator | Friday 19 September 2025 07:00:17 +0000 (0:00:00.686) 0:00:07.290 ******
2025-09-19 07:06:54.003278 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:06:54.003289 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:06:54.003301 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:06:54.003312 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.003323 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:54.003334 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:54.003345 | orchestrator |
2025-09-19 07:06:54.003356 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2025-09-19 07:06:54.003368 | orchestrator | Friday 19 September 2025 07:00:18 +0000 (0:00:00.857) 0:00:07.976 ******
2025-09-19 07:06:54.003379 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-19 07:06:54.003390 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-19 07:06:54.003402 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:06:54.003413 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-19 07:06:54.003432 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-19 07:06:54.003443 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:06:54.003455 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-19 07:06:54.003466 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-19 07:06:54.003477 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:06:54.003488 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-19 07:06:54.003511 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-19 07:06:54.003523 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.003534 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-19 07:06:54.003545 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-19 07:06:54.003556 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:54.003568 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-19 07:06:54.003579 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-19 07:06:54.003591 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:54.003602 | orchestrator |
2025-09-19 07:06:54.003613 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2025-09-19 07:06:54.003624 | orchestrator | Friday 19 September 2025 07:00:19 +0000 (0:00:01.231) 0:00:08.833 ******
2025-09-19 07:06:54.003635 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:06:54.003646 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:06:54.003657 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:06:54.003669 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.003680 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:54.003691 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:54.003702 | orchestrator |
2025-09-19 07:06:54.003713 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2025-09-19 07:06:54.003725 | orchestrator | Friday 19 September 2025 07:00:20 +0000 (0:00:01.231) 0:00:10.065 ******
2025-09-19 07:06:54.003736 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:06:54.003748 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:06:54.003759 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:06:54.003770 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:54.003781 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:06:54.003792 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:06:54.003803 | orchestrator |
2025-09-19 07:06:54.003814 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2025-09-19 07:06:54.003826 | orchestrator | Friday 19 September 2025 07:00:21 +0000 (0:00:00.726) 0:00:10.791 ******
2025-09-19 07:06:54.003837 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:06:54.003848 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:06:54.003859 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:06:54.003870 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:06:54.003881 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:06:54.003892 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:06:54.003903 | orchestrator |
2025-09-19 07:06:54.003914 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2025-09-19 07:06:54.003926 | orchestrator | Friday 19 September 2025 07:01:31 +0000 (0:01:10.307) 0:01:21.099 ******
2025-09-19 07:06:54.003937 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:06:54.003948 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:06:54.003959 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:06:54.003969 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.003981 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:54.003992 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:54.004003 | orchestrator |
2025-09-19 07:06:54.004014 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2025-09-19 07:06:54.004032 | orchestrator | Friday 19 September 2025 07:01:32 +0000 (0:00:01.354) 0:01:22.453 ******
2025-09-19 07:06:54.004043 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:06:54.004054 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:06:54.004065 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:06:54.004076 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.004087 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:54.004098 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:54.004109 | orchestrator |
2025-09-19 07:06:54.004121 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2025-09-19 07:06:54.004133 | orchestrator | Friday 19 September 2025 07:01:33 +0000 (0:00:01.078) 0:01:23.531 ******
2025-09-19 07:06:54.004144 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:06:54.004156 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:06:54.004167 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:06:54.004178 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:54.004194 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:06:54.004218 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:06:54.004230 | orchestrator |
2025-09-19 07:06:54.004241 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2025-09-19 07:06:54.004253 | orchestrator | Friday 19 September 2025 07:01:34 +0000 (0:00:00.568) 0:01:24.099 ******
2025-09-19 07:06:54.004264 | orchestrator | changed: [testbed-node-3] => (item=rancher)
2025-09-19 07:06:54.004276 | orchestrator | changed: [testbed-node-4] => (item=rancher)
2025-09-19 07:06:54.004307 | orchestrator | changed: [testbed-node-5] => (item=rancher)
2025-09-19 07:06:54.004319 | orchestrator | changed: [testbed-node-0] => (item=rancher)
2025-09-19 07:06:54.004330 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s)
2025-09-19 07:06:54.004342 | orchestrator | changed: [testbed-node-1] => (item=rancher)
2025-09-19 07:06:54.004353 | orchestrator | changed: [testbed-node-2] => (item=rancher)
2025-09-19 07:06:54.004364 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s)
2025-09-19 07:06:54.004375 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s)
2025-09-19 07:06:54.004387 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s)
2025-09-19 07:06:54.004398 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s)
2025-09-19 07:06:54.004409 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s)
2025-09-19 07:06:54.004420 | orchestrator |
2025-09-19 07:06:54.004431 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2025-09-19 07:06:54.004443 | orchestrator | Friday 19 September 2025 07:01:35 +0000 (0:00:01.312) 0:01:25.412 ******
2025-09-19 07:06:54.004454 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:06:54.004465 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:06:54.004476 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:06:54.004487 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:06:54.004498 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:06:54.004509 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:06:54.004520 | orchestrator |
2025-09-19 07:06:54.004538 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2025-09-19 07:06:54.004550 | orchestrator |
2025-09-19 07:06:54.004561 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2025-09-19 07:06:54.004572 | orchestrator | Friday 19 September 2025 07:01:37 +0000 (0:00:01.348) 0:01:26.761 ******
2025-09-19 07:06:54.004583 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:54.004595 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:06:54.004606 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:06:54.004617 | orchestrator |
2025-09-19 07:06:54.004628 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2025-09-19 07:06:54.004639 | orchestrator | Friday 19 September 2025 07:01:37 +0000 (0:00:00.645) 0:01:27.407 ******
2025-09-19 07:06:54.004650 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:06:54.004661 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:06:54.004679 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:54.004690 | orchestrator |
2025-09-19 07:06:54.004701 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2025-09-19 07:06:54.004712 | orchestrator | Friday 19 September 2025 07:01:38 +0000 (0:00:01.016) 0:01:28.423 ******
2025-09-19 07:06:54.004723 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:54.004734 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:06:54.004745 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:06:54.004756 | orchestrator |
2025-09-19 07:06:54.004767 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2025-09-19 07:06:54.004778 | orchestrator | Friday 19 September 2025 07:01:39 +0000 (0:00:01.016) 0:01:29.440 ******
2025-09-19 07:06:54.004789 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:54.004800 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:06:54.004811 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:06:54.004822 | orchestrator |
2025-09-19 07:06:54.004833 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2025-09-19 07:06:54.004844 | orchestrator | Friday 19 September 2025 07:01:40 +0000 (0:00:00.732) 0:01:30.173 ******
2025-09-19 07:06:54.004855 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.004867 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:54.004878 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:54.004889 | orchestrator |
2025-09-19 07:06:54.004900 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2025-09-19 07:06:54.004911 | orchestrator | Friday 19 September 2025 07:01:40 +0000 (0:00:00.272) 0:01:30.446 ******
2025-09-19 07:06:54.004922 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:54.004933 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:06:54.004944 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:06:54.004955 | orchestrator |
2025-09-19 07:06:54.004966 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2025-09-19 07:06:54.004977 | orchestrator | Friday 19 September 2025 07:01:41 +0000 (0:00:00.564) 0:01:31.011 ******
2025-09-19 07:06:54.004988 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:06:54.004999 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:06:54.005010 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:06:54.005021 | orchestrator |
2025-09-19 07:06:54.005032 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2025-09-19 07:06:54.005043 | orchestrator | Friday 19 September 2025 07:01:42 +0000 (0:00:01.240) 0:01:32.252 ******
2025-09-19 07:06:54.005054 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:06:54.005065 | orchestrator |
2025-09-19 07:06:54.005076 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2025-09-19 07:06:54.005087 | orchestrator | Friday 19 September 2025 07:01:43 +0000 (0:00:00.478) 0:01:32.730 ******
2025-09-19 07:06:54.005098 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:54.005109 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:06:54.005121 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:06:54.005132 | orchestrator |
2025-09-19 07:06:54.005143 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2025-09-19 07:06:54.005154 | orchestrator | Friday 19 September 2025 07:01:44 +0000 (0:00:01.114) 0:01:33.845 ******
2025-09-19 07:06:54.005165 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:54.005176 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:54.005187 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:06:54.005237 | orchestrator |
2025-09-19 07:06:54.005255 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2025-09-19 07:06:54.005267 | orchestrator | Friday 19 September 2025 07:01:44 +0000 (0:00:00.519) 0:01:34.365 ******
2025-09-19 07:06:54.005278 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:54.005289 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:54.005300 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:06:54.005311 | orchestrator |
2025-09-19 07:06:54.005322 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-09-19 07:06:54.005340 | orchestrator | Friday 19 September 2025 07:01:45 +0000 (0:00:00.735) 0:01:35.101 ******
2025-09-19 07:06:54.005351 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:54.005362 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:54.005373 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:06:54.005384 | orchestrator |
2025-09-19 07:06:54.005395 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-09-19 07:06:54.005406 | orchestrator | Friday 19 September 2025 07:01:47 +0000 (0:00:01.576) 0:01:36.678 ******
2025-09-19 07:06:54.005417 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.005428 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:54.005439 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:54.005450 | orchestrator |
2025-09-19 07:06:54.005471 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-09-19 07:06:54.005491 | orchestrator | Friday 19 September 2025 07:01:47 +0000 (0:00:00.773) 0:01:37.451 ******
2025-09-19 07:06:54.005512 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.005531 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:54.005550 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:54.005569 | orchestrator |
2025-09-19 07:06:54.005590 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-09-19 07:06:54.005610 | orchestrator | Friday 19 September 2025 07:01:48 +0000 (0:00:00.918) 0:01:38.369 ******
2025-09-19 07:06:54.005632 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:06:54.005652 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:06:54.005666 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:06:54.005677 | orchestrator |
2025-09-19 07:06:54.005696 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2025-09-19 07:06:54.005708 | orchestrator | Friday 19 September 2025 07:01:50 +0000 (0:00:02.119) 0:01:40.489 ******
2025-09-19 07:06:54.005720 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-19 07:06:54.005732 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-19 07:06:54.005743 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-19 07:06:54.005754 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-19 07:06:54.005765 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-19 07:06:54.005776 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-19 07:06:54.005787 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-19 07:06:54.005798 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-19 07:06:54.005809 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-19 07:06:54.005820 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-19 07:06:54.005831 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-19 07:06:54.005843 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-19 07:06:54.005853 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-09-19 07:06:54.005872 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2025-09-19 07:06:54.005884 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-09-19 07:06:54.005895 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:06:54.005906 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:06:54.005917 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:06:54.005928 | orchestrator | 2025-09-19 07:06:54.005939 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-09-19 07:06:54.005950 | orchestrator | Friday 19 September 2025 07:02:46 +0000 (0:00:55.596) 0:02:36.085 ****** 2025-09-19 07:06:54.005961 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:06:54.005972 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:06:54.005983 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:06:54.005994 | orchestrator | 2025-09-19 07:06:54.006011 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-09-19 07:06:54.006073 | orchestrator | Friday 19 September 2025 07:02:46 +0000 (0:00:00.356) 0:02:36.442 ****** 2025-09-19 07:06:54.006084 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:06:54.006095 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:06:54.006107 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:06:54.006118 | orchestrator | 2025-09-19 07:06:54.006129 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-09-19 07:06:54.006140 | orchestrator | Friday 19 September 2025 07:02:47 +0000 (0:00:01.088) 0:02:37.531 ****** 2025-09-19 07:06:54.006151 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:06:54.006163 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:06:54.006173 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:06:54.006185 | orchestrator | 2025-09-19 07:06:54.006196 | orchestrator | TASK [k3s_server : Enable and check K3s service] 
******************************* 2025-09-19 07:06:54.006223 | orchestrator | Friday 19 September 2025 07:02:49 +0000 (0:00:01.211) 0:02:38.743 ****** 2025-09-19 07:06:54.006234 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:06:54.006245 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:06:54.006256 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:06:54.006267 | orchestrator | 2025-09-19 07:06:54.006279 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-09-19 07:06:54.006290 | orchestrator | Friday 19 September 2025 07:03:15 +0000 (0:00:25.995) 0:03:04.739 ****** 2025-09-19 07:06:54.006301 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:06:54.006312 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:06:54.006323 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:06:54.006334 | orchestrator | 2025-09-19 07:06:54.006345 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-09-19 07:06:54.006356 | orchestrator | Friday 19 September 2025 07:03:16 +0000 (0:00:00.927) 0:03:05.667 ****** 2025-09-19 07:06:54.006368 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:06:54.006379 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:06:54.006390 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:06:54.006401 | orchestrator | 2025-09-19 07:06:54.006419 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-09-19 07:06:54.006430 | orchestrator | Friday 19 September 2025 07:03:17 +0000 (0:00:01.541) 0:03:07.208 ****** 2025-09-19 07:06:54.006441 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:06:54.006453 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:06:54.006464 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:06:54.006475 | orchestrator | 2025-09-19 07:06:54.006486 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-09-19 
07:06:54.006497 | orchestrator | Friday 19 September 2025 07:03:18 +0000 (0:00:00.851) 0:03:08.060 ****** 2025-09-19 07:06:54.006508 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:06:54.006519 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:06:54.006557 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:06:54.006569 | orchestrator | 2025-09-19 07:06:54.006581 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-09-19 07:06:54.006592 | orchestrator | Friday 19 September 2025 07:03:19 +0000 (0:00:00.853) 0:03:08.914 ****** 2025-09-19 07:06:54.006603 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:06:54.006614 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:06:54.006625 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:06:54.006636 | orchestrator | 2025-09-19 07:06:54.006648 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-09-19 07:06:54.006659 | orchestrator | Friday 19 September 2025 07:03:19 +0000 (0:00:00.289) 0:03:09.203 ****** 2025-09-19 07:06:54.006670 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:06:54.006681 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:06:54.006692 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:06:54.006703 | orchestrator | 2025-09-19 07:06:54.006715 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-09-19 07:06:54.006726 | orchestrator | Friday 19 September 2025 07:03:20 +0000 (0:00:00.723) 0:03:09.927 ****** 2025-09-19 07:06:54.006737 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:06:54.006748 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:06:54.006759 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:06:54.006770 | orchestrator | 2025-09-19 07:06:54.006782 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-09-19 07:06:54.006793 | orchestrator | Friday 19 
September 2025 07:03:21 +0000 (0:00:00.748) 0:03:10.675 ****** 2025-09-19 07:06:54.006804 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:06:54.006815 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:06:54.006826 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:06:54.006837 | orchestrator | 2025-09-19 07:06:54.006848 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-09-19 07:06:54.006860 | orchestrator | Friday 19 September 2025 07:03:21 +0000 (0:00:00.876) 0:03:11.552 ****** 2025-09-19 07:06:54.006871 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:06:54.006882 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:06:54.006904 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:06:54.006915 | orchestrator | 2025-09-19 07:06:54.006927 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-09-19 07:06:54.006938 | orchestrator | Friday 19 September 2025 07:03:22 +0000 (0:00:00.961) 0:03:12.514 ****** 2025-09-19 07:06:54.006949 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:06:54.006960 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:06:54.006971 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:06:54.006982 | orchestrator | 2025-09-19 07:06:54.006993 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-09-19 07:06:54.007005 | orchestrator | Friday 19 September 2025 07:03:23 +0000 (0:00:00.750) 0:03:13.265 ****** 2025-09-19 07:06:54.007016 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:06:54.007027 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:06:54.007038 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:06:54.007049 | orchestrator | 2025-09-19 07:06:54.007061 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-09-19 07:06:54.007072 | orchestrator | Friday 19 September 
2025 07:03:24 +0000 (0:00:00.324) 0:03:13.590 ****** 2025-09-19 07:06:54.007083 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:06:54.007094 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:06:54.007105 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:06:54.007116 | orchestrator | 2025-09-19 07:06:54.007127 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-09-19 07:06:54.007143 | orchestrator | Friday 19 September 2025 07:03:24 +0000 (0:00:00.800) 0:03:14.390 ****** 2025-09-19 07:06:54.007155 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:06:54.007166 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:06:54.007187 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:06:54.007213 | orchestrator | 2025-09-19 07:06:54.007232 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-09-19 07:06:54.007243 | orchestrator | Friday 19 September 2025 07:03:25 +0000 (0:00:00.994) 0:03:15.384 ****** 2025-09-19 07:06:54.007255 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-19 07:06:54.007266 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-19 07:06:54.007277 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-19 07:06:54.007288 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-19 07:06:54.007299 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-19 07:06:54.007310 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-19 07:06:54.007321 | orchestrator | changed: [testbed-node-0] => 
(item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-19 07:06:54.007332 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-19 07:06:54.007344 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-19 07:06:54.007361 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-09-19 07:06:54.007372 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-19 07:06:54.007383 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-19 07:06:54.007394 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-09-19 07:06:54.007406 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-19 07:06:54.007417 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-19 07:06:54.007428 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-19 07:06:54.007438 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-19 07:06:54.007450 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-19 07:06:54.007461 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-19 07:06:54.007472 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-19 07:06:54.007483 | orchestrator | 2025-09-19 07:06:54.007494 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-09-19 07:06:54.007505 | orchestrator | 2025-09-19 07:06:54.007516 | 
orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-09-19 07:06:54.007528 | orchestrator | Friday 19 September 2025 07:03:29 +0000 (0:00:03.510) 0:03:18.895 ****** 2025-09-19 07:06:54.007538 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:06:54.007550 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:06:54.007561 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:06:54.007572 | orchestrator | 2025-09-19 07:06:54.007583 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-09-19 07:06:54.007594 | orchestrator | Friday 19 September 2025 07:03:29 +0000 (0:00:00.544) 0:03:19.439 ****** 2025-09-19 07:06:54.007605 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:06:54.007616 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:06:54.007627 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:06:54.007638 | orchestrator | 2025-09-19 07:06:54.007649 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-09-19 07:06:54.007660 | orchestrator | Friday 19 September 2025 07:03:30 +0000 (0:00:00.659) 0:03:20.099 ****** 2025-09-19 07:06:54.007678 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:06:54.007689 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:06:54.007700 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:06:54.007712 | orchestrator | 2025-09-19 07:06:54.007723 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-09-19 07:06:54.007734 | orchestrator | Friday 19 September 2025 07:03:31 +0000 (0:00:00.587) 0:03:20.687 ****** 2025-09-19 07:06:54.007745 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:06:54.007756 | orchestrator | 2025-09-19 07:06:54.007767 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-09-19 
07:06:54.007779 | orchestrator | Friday 19 September 2025 07:03:31 +0000 (0:00:00.525) 0:03:21.212 ****** 2025-09-19 07:06:54.007790 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:06:54.007801 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:06:54.007812 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:06:54.007823 | orchestrator | 2025-09-19 07:06:54.007834 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-09-19 07:06:54.007845 | orchestrator | Friday 19 September 2025 07:03:32 +0000 (0:00:00.426) 0:03:21.639 ****** 2025-09-19 07:06:54.007856 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:06:54.007867 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:06:54.007878 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:06:54.007889 | orchestrator | 2025-09-19 07:06:54.007910 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-09-19 07:06:54.007921 | orchestrator | Friday 19 September 2025 07:03:32 +0000 (0:00:00.650) 0:03:22.289 ****** 2025-09-19 07:06:54.007933 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:06:54.007944 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:06:54.007955 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:06:54.007966 | orchestrator | 2025-09-19 07:06:54.007977 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2025-09-19 07:06:54.007988 | orchestrator | Friday 19 September 2025 07:03:33 +0000 (0:00:00.470) 0:03:22.760 ****** 2025-09-19 07:06:54.007999 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:06:54.008023 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:06:54.008035 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:06:54.008046 | orchestrator | 2025-09-19 07:06:54.008057 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2025-09-19 07:06:54.008068 | 
orchestrator | Friday 19 September 2025 07:03:34 +0000 (0:00:00.887) 0:03:23.648 ****** 2025-09-19 07:06:54.008079 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:06:54.008090 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:06:54.008101 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:06:54.008112 | orchestrator | 2025-09-19 07:06:54.008123 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-09-19 07:06:54.008134 | orchestrator | Friday 19 September 2025 07:03:35 +0000 (0:00:01.228) 0:03:24.876 ****** 2025-09-19 07:06:54.008145 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:06:54.008157 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:06:54.008168 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:06:54.008179 | orchestrator | 2025-09-19 07:06:54.008190 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-09-19 07:06:54.008245 | orchestrator | Friday 19 September 2025 07:03:37 +0000 (0:00:02.021) 0:03:26.898 ****** 2025-09-19 07:06:54.008258 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:06:54.008269 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:06:54.008281 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:06:54.008292 | orchestrator | 2025-09-19 07:06:54.008309 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-19 07:06:54.008321 | orchestrator | 2025-09-19 07:06:54.008332 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-19 07:06:54.008343 | orchestrator | Friday 19 September 2025 07:03:49 +0000 (0:00:12.113) 0:03:39.011 ****** 2025-09-19 07:06:54.008362 | orchestrator | ok: [testbed-manager] 2025-09-19 07:06:54.008373 | orchestrator | 2025-09-19 07:06:54.008397 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-19 
07:06:54.008408 | orchestrator | Friday 19 September 2025 07:03:50 +0000 (0:00:00.751) 0:03:39.763 ****** 2025-09-19 07:06:54.008419 | orchestrator | changed: [testbed-manager] 2025-09-19 07:06:54.008431 | orchestrator | 2025-09-19 07:06:54.008442 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-19 07:06:54.008453 | orchestrator | Friday 19 September 2025 07:03:50 +0000 (0:00:00.441) 0:03:40.205 ****** 2025-09-19 07:06:54.008464 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-19 07:06:54.008475 | orchestrator | 2025-09-19 07:06:54.008487 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-19 07:06:54.008498 | orchestrator | Friday 19 September 2025 07:03:51 +0000 (0:00:00.584) 0:03:40.789 ****** 2025-09-19 07:06:54.008509 | orchestrator | changed: [testbed-manager] 2025-09-19 07:06:54.008520 | orchestrator | 2025-09-19 07:06:54.008531 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-19 07:06:54.008542 | orchestrator | Friday 19 September 2025 07:03:52 +0000 (0:00:00.852) 0:03:41.642 ****** 2025-09-19 07:06:54.008553 | orchestrator | changed: [testbed-manager] 2025-09-19 07:06:54.008564 | orchestrator | 2025-09-19 07:06:54.008575 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-19 07:06:54.008597 | orchestrator | Friday 19 September 2025 07:03:53 +0000 (0:00:01.180) 0:03:42.822 ****** 2025-09-19 07:06:54.008608 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-19 07:06:54.008620 | orchestrator | 2025-09-19 07:06:54.008631 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-19 07:06:54.008642 | orchestrator | Friday 19 September 2025 07:03:54 +0000 (0:00:01.751) 0:03:44.573 ****** 2025-09-19 07:06:54.008653 | orchestrator | changed: 
[testbed-manager -> localhost] 2025-09-19 07:06:54.008664 | orchestrator | 2025-09-19 07:06:54.008676 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-19 07:06:54.008687 | orchestrator | Friday 19 September 2025 07:03:55 +0000 (0:00:00.928) 0:03:45.502 ****** 2025-09-19 07:06:54.008698 | orchestrator | changed: [testbed-manager] 2025-09-19 07:06:54.008709 | orchestrator | 2025-09-19 07:06:54.008720 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-19 07:06:54.008732 | orchestrator | Friday 19 September 2025 07:03:56 +0000 (0:00:00.500) 0:03:46.003 ****** 2025-09-19 07:06:54.008743 | orchestrator | changed: [testbed-manager] 2025-09-19 07:06:54.008754 | orchestrator | 2025-09-19 07:06:54.008765 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-09-19 07:06:54.008776 | orchestrator | 2025-09-19 07:06:54.008787 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-09-19 07:06:54.008806 | orchestrator | Friday 19 September 2025 07:03:56 +0000 (0:00:00.463) 0:03:46.466 ****** 2025-09-19 07:06:54.008817 | orchestrator | ok: [testbed-manager] 2025-09-19 07:06:54.008827 | orchestrator | 2025-09-19 07:06:54.008837 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-09-19 07:06:54.008846 | orchestrator | Friday 19 September 2025 07:03:57 +0000 (0:00:00.152) 0:03:46.619 ****** 2025-09-19 07:06:54.008856 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-09-19 07:06:54.008866 | orchestrator | 2025-09-19 07:06:54.008876 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-09-19 07:06:54.008886 | orchestrator | Friday 19 September 2025 07:03:57 +0000 (0:00:00.260) 0:03:46.880 ****** 2025-09-19 07:06:54.008896 | 
orchestrator | ok: [testbed-manager] 2025-09-19 07:06:54.008905 | orchestrator | 2025-09-19 07:06:54.008915 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-09-19 07:06:54.008930 | orchestrator | Friday 19 September 2025 07:03:58 +0000 (0:00:00.989) 0:03:47.869 ****** 2025-09-19 07:06:54.008940 | orchestrator | ok: [testbed-manager] 2025-09-19 07:06:54.008955 | orchestrator | 2025-09-19 07:06:54.008965 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-09-19 07:06:54.008975 | orchestrator | Friday 19 September 2025 07:04:00 +0000 (0:00:01.719) 0:03:49.588 ****** 2025-09-19 07:06:54.008985 | orchestrator | changed: [testbed-manager] 2025-09-19 07:06:54.008995 | orchestrator | 2025-09-19 07:06:54.009005 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-09-19 07:06:54.009015 | orchestrator | Friday 19 September 2025 07:04:01 +0000 (0:00:01.262) 0:03:50.851 ****** 2025-09-19 07:06:54.009025 | orchestrator | ok: [testbed-manager] 2025-09-19 07:06:54.009035 | orchestrator | 2025-09-19 07:06:54.009045 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-09-19 07:06:54.009055 | orchestrator | Friday 19 September 2025 07:04:01 +0000 (0:00:00.465) 0:03:51.317 ****** 2025-09-19 07:06:54.009065 | orchestrator | changed: [testbed-manager] 2025-09-19 07:06:54.009075 | orchestrator | 2025-09-19 07:06:54.009085 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-09-19 07:06:54.009095 | orchestrator | Friday 19 September 2025 07:04:09 +0000 (0:00:07.336) 0:03:58.654 ****** 2025-09-19 07:06:54.009104 | orchestrator | changed: [testbed-manager] 2025-09-19 07:06:54.009114 | orchestrator | 2025-09-19 07:06:54.009124 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-09-19 
07:06:54.009134 | orchestrator | Friday 19 September 2025 07:04:20 +0000 (0:00:11.712) 0:04:10.366 ******
2025-09-19 07:06:54.009144 | orchestrator | ok: [testbed-manager]
2025-09-19 07:06:54.009154 | orchestrator |
2025-09-19 07:06:54.009164 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2025-09-19 07:06:54.009174 | orchestrator |
2025-09-19 07:06:54.009184 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2025-09-19 07:06:54.009235 | orchestrator | Friday 19 September 2025 07:04:21 +0000 (0:00:00.549) 0:04:10.916 ******
2025-09-19 07:06:54.009247 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:54.009257 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:06:54.009267 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:06:54.009276 | orchestrator |
2025-09-19 07:06:54.009287 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2025-09-19 07:06:54.009296 | orchestrator | Friday 19 September 2025 07:04:21 +0000 (0:00:00.553) 0:04:11.469 ******
2025-09-19 07:06:54.009306 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.009316 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:54.009326 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:54.009336 | orchestrator |
2025-09-19 07:06:54.009346 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2025-09-19 07:06:54.009367 | orchestrator | Friday 19 September 2025 07:04:22 +0000 (0:00:00.351) 0:04:11.821 ******
2025-09-19 07:06:54.009378 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:06:54.009388 | orchestrator |
2025-09-19 07:06:54.009398 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2025-09-19 07:06:54.009408 | orchestrator | Friday 19 September 2025 07:04:22 +0000 (0:00:00.615) 0:04:12.436 ******
2025-09-19 07:06:54.009418 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.009428 | orchestrator |
2025-09-19 07:06:54.009438 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] **********************
2025-09-19 07:06:54.009448 | orchestrator | Friday 19 September 2025 07:04:23 +0000 (0:00:00.207) 0:04:12.643 ******
2025-09-19 07:06:54.009458 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.009467 | orchestrator |
2025-09-19 07:06:54.009477 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ********
2025-09-19 07:06:54.009488 | orchestrator | Friday 19 September 2025 07:04:23 +0000 (0:00:00.197) 0:04:12.841 ******
2025-09-19 07:06:54.009498 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.009508 | orchestrator |
2025-09-19 07:06:54.009518 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] *************
2025-09-19 07:06:54.009534 | orchestrator | Friday 19 September 2025 07:04:23 +0000 (0:00:00.702) 0:04:13.544 ******
2025-09-19 07:06:54.009544 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.009554 | orchestrator |
2025-09-19 07:06:54.009563 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] **************
2025-09-19 07:06:54.009573 | orchestrator | Friday 19 September 2025 07:04:24 +0000 (0:00:00.211) 0:04:13.755 ******
2025-09-19 07:06:54.009583 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.009593 | orchestrator |
2025-09-19 07:06:54.009603 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] **********************
2025-09-19 07:06:54.009613 | orchestrator | Friday 19 September 2025 07:04:24 +0000 (0:00:00.220) 0:04:13.976 ******
2025-09-19 07:06:54.009623 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.009633 | orchestrator |
2025-09-19 07:06:54.009643 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ******************
2025-09-19 07:06:54.009652 | orchestrator | Friday 19 September 2025 07:04:24 +0000 (0:00:00.202) 0:04:14.178 ******
2025-09-19 07:06:54.009660 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.009669 | orchestrator |
2025-09-19 07:06:54.009677 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] ***
2025-09-19 07:06:54.009685 | orchestrator | Friday 19 September 2025 07:04:24 +0000 (0:00:00.210) 0:04:14.389 ******
2025-09-19 07:06:54.009702 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.009710 | orchestrator |
2025-09-19 07:06:54.009718 | orchestrator | TASK [k3s_server_post : Set architecture variable] *****************************
2025-09-19 07:06:54.009726 | orchestrator | Friday 19 September 2025 07:04:25 +0000 (0:00:00.264) 0:04:14.653 ******
2025-09-19 07:06:54.009734 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.009742 | orchestrator |
2025-09-19 07:06:54.009751 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] **********************
2025-09-19 07:06:54.009759 | orchestrator | Friday 19 September 2025 07:04:25 +0000 (0:00:00.220) 0:04:14.874 ******
2025-09-19 07:06:54.009767 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)
2025-09-19 07:06:54.009779 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)
2025-09-19 07:06:54.009787 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.009795 | orchestrator |
2025-09-19 07:06:54.009803 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] *************************
2025-09-19 07:06:54.009811 | orchestrator | Friday 19 September 2025 07:04:25 +0000 (0:00:00.312) 0:04:15.186 ******
2025-09-19 07:06:54.009819 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.009828 | orchestrator |
2025-09-19 07:06:54.009836 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ******************
2025-09-19 07:06:54.009844 | orchestrator | Friday 19 September 2025 07:04:25 +0000 (0:00:00.236) 0:04:15.422 ******
2025-09-19 07:06:54.009852 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.009860 | orchestrator |
2025-09-19 07:06:54.009868 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] ***********
2025-09-19 07:06:54.009876 | orchestrator | Friday 19 September 2025 07:04:26 +0000 (0:00:00.227) 0:04:15.650 ******
2025-09-19 07:06:54.009884 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.009892 | orchestrator |
2025-09-19 07:06:54.009909 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-09-19 07:06:54.009918 | orchestrator | Friday 19 September 2025 07:04:26 +0000 (0:00:00.232) 0:04:15.882 ******
2025-09-19 07:06:54.009926 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.009934 | orchestrator |
2025-09-19 07:06:54.009942 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2025-09-19 07:06:54.009950 | orchestrator | Friday 19 September 2025 07:04:26 +0000 (0:00:00.311) 0:04:16.194 ******
2025-09-19 07:06:54.009958 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.009967 | orchestrator |
2025-09-19 07:06:54.009975 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2025-09-19 07:06:54.009983 | orchestrator | Friday 19 September 2025 07:04:26 +0000 (0:00:00.196) 0:04:16.390 ******
2025-09-19 07:06:54.009995 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.010003 | orchestrator |
2025-09-19 07:06:54.010012 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2025-09-19 07:06:54.010044 | orchestrator | Friday 19 September 2025 07:04:27 +0000 (0:00:00.738) 0:04:17.128 ******
2025-09-19 07:06:54.010053 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.010061 | orchestrator |
2025-09-19 07:06:54.010069 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2025-09-19 07:06:54.010077 | orchestrator | Friday 19 September 2025 07:04:27 +0000 (0:00:00.207) 0:04:17.336 ******
2025-09-19 07:06:54.010085 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.010093 | orchestrator |
2025-09-19 07:06:54.010101 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2025-09-19 07:06:54.010109 | orchestrator | Friday 19 September 2025 07:04:27 +0000 (0:00:00.206) 0:04:17.543 ******
2025-09-19 07:06:54.010117 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.010126 | orchestrator |
2025-09-19 07:06:54.010134 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2025-09-19 07:06:54.010142 | orchestrator | Friday 19 September 2025 07:04:28 +0000 (0:00:00.218) 0:04:17.761 ******
2025-09-19 07:06:54.010150 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.010158 | orchestrator |
2025-09-19 07:06:54.010166 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2025-09-19 07:06:54.010174 | orchestrator | Friday 19 September 2025 07:04:28 +0000 (0:00:00.218) 0:04:17.979 ******
2025-09-19 07:06:54.010182 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.010190 | orchestrator |
2025-09-19 07:06:54.010211 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2025-09-19 07:06:54.010220 | orchestrator | Friday 19 September 2025 07:04:28 +0000 (0:00:00.219) 0:04:18.199 ******
2025-09-19 07:06:54.010228 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)
2025-09-19 07:06:54.010245 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)
2025-09-19 07:06:54.010254 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)
2025-09-19 07:06:54.010262 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)
2025-09-19 07:06:54.010270 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.010278 | orchestrator |
2025-09-19 07:06:54.010286 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-09-19 07:06:54.010295 | orchestrator | Friday 19 September 2025 07:04:29 +0000 (0:00:00.455) 0:04:18.654 ******
2025-09-19 07:06:54.010303 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.010311 | orchestrator |
2025-09-19 07:06:54.010319 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-09-19 07:06:54.010327 | orchestrator | Friday 19 September 2025 07:04:29 +0000 (0:00:00.212) 0:04:18.867 ******
2025-09-19 07:06:54.010335 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.010343 | orchestrator |
2025-09-19 07:06:54.010352 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2025-09-19 07:06:54.010360 | orchestrator | Friday 19 September 2025 07:04:29 +0000 (0:00:00.252) 0:04:19.120 ******
2025-09-19 07:06:54.010368 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.010376 | orchestrator |
2025-09-19 07:06:54.010384 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2025-09-19 07:06:54.010392 | orchestrator | Friday 19 September 2025 07:04:29 +0000 (0:00:00.207) 0:04:19.327 ******
2025-09-19 07:06:54.010401 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.010409 | orchestrator |
2025-09-19 07:06:54.010417 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2025-09-19 07:06:54.010425 | orchestrator | Friday 19 September 2025 07:04:30 +0000 (0:00:00.263) 0:04:19.590 ******
2025-09-19 07:06:54.010437 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2025-09-19 07:06:54.010452 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2025-09-19 07:06:54.010473 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.010488 | orchestrator |
2025-09-19 07:06:54.010502 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2025-09-19 07:06:54.010514 | orchestrator | Friday 19 September 2025 07:04:30 +0000 (0:00:00.733) 0:04:20.324 ******
2025-09-19 07:06:54.010522 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.010530 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:54.010538 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:54.010546 | orchestrator |
2025-09-19 07:06:54.010554 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2025-09-19 07:06:54.010562 | orchestrator | Friday 19 September 2025 07:04:31 +0000 (0:00:00.602) 0:04:20.927 ******
2025-09-19 07:06:54.010570 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:54.010578 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:06:54.010587 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:06:54.010605 | orchestrator |
2025-09-19 07:06:54.010613 | orchestrator | PLAY [Apply role k9s] **********************************************************
2025-09-19 07:06:54.010622 | orchestrator |
2025-09-19 07:06:54.010630 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2025-09-19 07:06:54.010638 | orchestrator | Friday 19 September 2025 07:04:32 +0000 (0:00:00.929) 0:04:21.856 ******
2025-09-19 07:06:54.010646 | orchestrator | ok: [testbed-manager]
2025-09-19 07:06:54.010654 | orchestrator |
2025-09-19 07:06:54.010662 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2025-09-19 07:06:54.010671 | orchestrator | Friday 19 September 2025 07:04:32 +0000 (0:00:00.253) 0:04:22.109 ******
2025-09-19 07:06:54.010679 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2025-09-19 07:06:54.010687 | orchestrator |
2025-09-19 07:06:54.010695 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2025-09-19 07:06:54.010704 | orchestrator | Friday 19 September 2025 07:04:32 +0000 (0:00:00.427) 0:04:22.537 ******
2025-09-19 07:06:54.010712 | orchestrator | changed: [testbed-manager]
2025-09-19 07:06:54.010720 | orchestrator |
2025-09-19 07:06:54.010728 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2025-09-19 07:06:54.010736 | orchestrator |
2025-09-19 07:06:54.010744 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2025-09-19 07:06:54.010782 | orchestrator | Friday 19 September 2025 07:06:39 +0000 (0:02:06.910) 0:06:29.447 ******
2025-09-19 07:06:54.010792 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:06:54.010800 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:06:54.010808 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:06:54.010816 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:54.010824 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:06:54.010832 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:06:54.010840 | orchestrator |
2025-09-19 07:06:54.010848 | orchestrator | TASK [Manage labels] ***********************************************************
2025-09-19 07:06:54.010864 | orchestrator | Friday 19 September 2025 07:06:40 +0000 (0:00:00.621) 0:06:30.069 ******
2025-09-19 07:06:54.010873 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-19 07:06:54.010881 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-19 07:06:54.010889 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-19 07:06:54.010897 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-19 07:06:54.010905 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-19 07:06:54.010914 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-19 07:06:54.010922 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-19 07:06:54.010930 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-19 07:06:54.010944 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-19 07:06:54.010952 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-19 07:06:54.010960 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-09-19 07:06:54.010968 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-19 07:06:54.010976 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-09-19 07:06:54.010984 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-09-19 07:06:54.010993 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-19 07:06:54.011001 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-09-19 07:06:54.011009 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-09-19 07:06:54.011017 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-09-19 07:06:54.011025 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-09-19 07:06:54.011033 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-09-19 07:06:54.011041 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-09-19 07:06:54.011049 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-19 07:06:54.011057 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-19 07:06:54.011066 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-19 07:06:54.011083 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-19 07:06:54.011094 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-19 07:06:54.011103 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-19 07:06:54.011111 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-19 07:06:54.011119 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-19 07:06:54.011127 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-19 07:06:54.011135 | orchestrator |
2025-09-19 07:06:54.011143 | orchestrator | TASK [Manage annotations] ******************************************************
2025-09-19 07:06:54.011151 | orchestrator | Friday 19 September 2025 07:06:49 +0000 (0:00:09.504) 0:06:39.574 ******
2025-09-19 07:06:54.011159 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:06:54.011168 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:06:54.011176 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:06:54.011184 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.011192 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:54.011213 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:54.011221 | orchestrator |
2025-09-19 07:06:54.011229 | orchestrator | TASK [Manage taints] ***********************************************************
2025-09-19 07:06:54.011237 | orchestrator | Friday 19 September 2025 07:06:50 +0000 (0:00:00.487) 0:06:40.061 ******
2025-09-19 07:06:54.011245 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:06:54.011254 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:06:54.011271 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:06:54.011279 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:54.011287 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:54.011295 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:54.011303 | orchestrator |
2025-09-19 07:06:54.011311 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:06:54.011329 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:06:54.011339 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0
2025-09-19 07:06:54.011347 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-19 07:06:54.011355 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-19 07:06:54.011364 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-19 07:06:54.011372 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-19 07:06:54.011380 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-19 07:06:54.011388 | orchestrator |
2025-09-19 07:06:54.011396 | orchestrator |
2025-09-19 07:06:54.011404 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:06:54.011412 | orchestrator | Friday 19 September 2025 07:06:51 +0000 (0:00:00.527) 0:06:40.588 ******
2025-09-19 07:06:54.011421 | orchestrator | ===============================================================================
2025-09-19 07:06:54.011429 | orchestrator | k9s : Install k9s packages -------------------------------------------- 126.91s
2025-09-19 07:06:54.011437 | orchestrator | k3s_download : Download k3s binary x64 --------------------------------- 70.31s
2025-09-19 07:06:54.011445 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.60s
2025-09-19 07:06:54.011453 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.00s
2025-09-19 07:06:54.011461 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.11s
2025-09-19 07:06:54.011470 | orchestrator | kubectl : Install required packages ------------------------------------ 11.71s
2025-09-19 07:06:54.011478 | orchestrator | Manage labels ----------------------------------------------------------- 9.50s
2025-09-19 07:06:54.011486 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.34s
2025-09-19 07:06:54.011494 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.51s
2025-09-19 07:06:54.011502 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.22s
2025-09-19 07:06:54.011510 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.12s
2025-09-19 07:06:54.011518 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 2.02s
2025-09-19 07:06:54.011527 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.75s
2025-09-19 07:06:54.011535 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.72s
2025-09-19 07:06:54.011543 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.58s
2025-09-19 07:06:54.011551 | orchestrator | k3s_server : Register node-token file access mode ----------------------- 1.54s
2025-09-19 07:06:54.011559 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 1.35s
2025-09-19 07:06:54.011571 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.35s
2025-09-19 07:06:54.011579 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 1.31s
2025-09-19 07:06:54.011587 | orchestrator | kubectl : Add repository gpg key ---------------------------------------- 1.26s
2025-09-19 07:06:54.011595 | orchestrator | 2025-09-19 07:06:54 | INFO  | Task 5b56182c-823a-4d6a-b182-d33074dc7a98 is in state STARTED
2025-09-19 07:06:54.011608 | orchestrator | 2025-09-19 07:06:54 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:06:57.024718 | orchestrator | 2025-09-19 07:06:57 | INFO  | Task f21d7d6e-9934-44db-82b9-bae966a4256b is in state STARTED
2025-09-19 07:06:57.024814 | orchestrator | 2025-09-19 07:06:57 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:06:57.025560 | orchestrator | 2025-09-19 07:06:57 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:06:57.026970 | orchestrator | 2025-09-19 07:06:57 | INFO  | Task 5b56182c-823a-4d6a-b182-d33074dc7a98 is in state STARTED
2025-09-19 07:06:57.026994 | orchestrator | 2025-09-19 07:06:57 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:07:00.060269 | orchestrator |
2025-09-19 07:07:00 | INFO  | Task f21d7d6e-9934-44db-82b9-bae966a4256b is in state SUCCESS
2025-09-19 07:07:00.061408 | orchestrator | 2025-09-19 07:07:00 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:07:00.062384 | orchestrator | 2025-09-19 07:07:00 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:07:00.064478 | orchestrator | 2025-09-19 07:07:00 | INFO  | Task 5b56182c-823a-4d6a-b182-d33074dc7a98 is in state STARTED
2025-09-19 07:07:00.064531 | orchestrator | 2025-09-19 07:07:00 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:07:03.111029 | orchestrator | 2025-09-19 07:07:03 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:07:03.112854 | orchestrator | 2025-09-19 07:07:03 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:07:03.114844 | orchestrator | 2025-09-19 07:07:03 | INFO  | Task 5b56182c-823a-4d6a-b182-d33074dc7a98 is in state SUCCESS
2025-09-19 07:07:03.114951 | orchestrator | 2025-09-19 07:07:03 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:07:06.150894 | orchestrator | 2025-09-19 07:07:06 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:07:06.151446 | orchestrator | 2025-09-19 07:07:06 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:07:06.151477 | orchestrator | 2025-09-19 07:07:06 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:07:09.195442 | orchestrator | 2025-09-19 07:07:09 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:07:09.195804 | orchestrator | 2025-09-19 07:07:09 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:07:09.195834 | orchestrator | 2025-09-19 07:07:09 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:07:12.242896 | orchestrator | 2025-09-19 07:07:12 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:07:12.244670 | orchestrator | 2025-09-19 07:07:12 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:07:12.244938 | orchestrator | 2025-09-19 07:07:12 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:07:15.278699 | orchestrator | 2025-09-19 07:07:15 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:07:15.278789 | orchestrator | 2025-09-19 07:07:15 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:07:15.278959 | orchestrator | 2025-09-19 07:07:15 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:07:18.316617 | orchestrator | 2025-09-19 07:07:18 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:07:18.321314 | orchestrator | 2025-09-19 07:07:18 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:07:18.321393 | orchestrator | 2025-09-19 07:07:18 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:07:21.354254 | orchestrator | 2025-09-19 07:07:21 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:07:21.355333 | orchestrator | 2025-09-19 07:07:21 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:07:21.355430 | orchestrator | 2025-09-19 07:07:21 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:07:24.404749 | orchestrator | 2025-09-19 07:07:24 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:07:24.405295 | orchestrator | 2025-09-19 07:07:24 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:07:24.405354 | orchestrator | 2025-09-19 07:07:24 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:07:27.440713 | orchestrator | 2025-09-19 07:07:27 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:07:27.441098 | orchestrator | 2025-09-19 07:07:27 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:07:27.441130 | orchestrator | 2025-09-19 07:07:27 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:07:30.493922 | orchestrator | 2025-09-19 07:07:30 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:07:30.497120 | orchestrator | 2025-09-19 07:07:30 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:07:30.497156 | orchestrator | 2025-09-19 07:07:30 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:07:33.537641 | orchestrator | 2025-09-19 07:07:33 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:07:33.538352 | orchestrator | 2025-09-19 07:07:33 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:07:33.538372 | orchestrator | 2025-09-19 07:07:33 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:07:36.576425 | orchestrator | 2025-09-19 07:07:36 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:07:36.576723 | orchestrator | 2025-09-19 07:07:36 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:07:36.576749 | orchestrator | 2025-09-19 07:07:36 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:07:39.605813 | orchestrator | 2025-09-19 07:07:39 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:07:39.607551 | orchestrator | 2025-09-19 07:07:39 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:07:39.607666 | orchestrator | 2025-09-19 07:07:39 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:07:42.654263 | orchestrator | 2025-09-19 07:07:42 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:07:42.654359 | orchestrator | 2025-09-19 07:07:42 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:07:42.654373 | orchestrator | 2025-09-19 07:07:42 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:07:45.695437 | orchestrator | 2025-09-19 07:07:45 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:07:45.697305 | orchestrator | 2025-09-19 07:07:45 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:07:45.697386 | orchestrator | 2025-09-19 07:07:45 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:07:48.744235 | orchestrator | 2025-09-19 07:07:48 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:07:48.746187 | orchestrator | 2025-09-19 07:07:48 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:07:48.746262 | orchestrator | 2025-09-19 07:07:48 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:07:51.787705 | orchestrator | 2025-09-19 07:07:51 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:07:51.789399 | orchestrator | 2025-09-19 07:07:51 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:07:51.789431 | orchestrator | 2025-09-19 07:07:51 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:07:54.830203 | orchestrator | 2025-09-19 07:07:54 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:07:54.831772 | orchestrator | 2025-09-19 07:07:54 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:07:54.831805 | orchestrator | 2025-09-19 07:07:54 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:07:57.867427 | orchestrator | 2025-09-19 07:07:57 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:07:57.867675 | orchestrator | 2025-09-19 07:07:57 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:07:57.867704 | orchestrator | 2025-09-19 07:07:57 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:08:00.916957 | orchestrator | 2025-09-19 07:08:00 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:08:00.919610 | orchestrator | 2025-09-19 07:08:00 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:08:01.384313 | orchestrator | 2025-09-19 07:08:00 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:08:03.960093 | orchestrator | 2025-09-19 07:08:03 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:08:03.960845 | orchestrator | 2025-09-19 07:08:03 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:08:03.960887 | orchestrator | 2025-09-19 07:08:03 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:08:06.994676 | orchestrator | 2025-09-19 07:08:06 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:08:06.995505 | orchestrator | 2025-09-19 07:08:06 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:08:06.995537 | orchestrator | 2025-09-19 07:08:06 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:08:10.061478 | orchestrator | 2025-09-19 07:08:10 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:08:10.063332 | orchestrator | 2025-09-19 07:08:10 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:08:10.063825 | orchestrator | 2025-09-19 07:08:10 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:08:13.109074 | orchestrator | 2025-09-19 07:08:13 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:08:13.113369 | orchestrator | 2025-09-19 07:08:13 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:08:13.113408 | orchestrator | 2025-09-19 07:08:13 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:08:16.154856 | orchestrator | 2025-09-19 07:08:16 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:08:16.155335 | orchestrator | 2025-09-19 07:08:16 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:08:16.155364 | orchestrator | 2025-09-19 07:08:16 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:08:19.190617 | orchestrator | 2025-09-19 07:08:19 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:08:19.191742 | orchestrator | 2025-09-19 07:08:19 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:08:19.191876 | orchestrator | 2025-09-19 07:08:19 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:08:22.219332 | orchestrator | 2025-09-19 07:08:22 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:08:22.219879 | orchestrator | 2025-09-19 07:08:22 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:08:22.219917 | orchestrator | 2025-09-19 07:08:22 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:08:25.269830 | orchestrator | 2025-09-19 07:08:25 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:08:25.270597 | orchestrator | 2025-09-19 07:08:25 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:08:25.270630 | orchestrator | 2025-09-19 07:08:25 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:08:28.313246 | orchestrator | 2025-09-19 07:08:28 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:08:28.315030 | orchestrator | 2025-09-19 07:08:28 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:08:28.315198 | orchestrator | 2025-09-19 07:08:28 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:08:31.356315 | orchestrator | 2025-09-19 07:08:31 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:08:31.356505 | orchestrator | 2025-09-19 07:08:31 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:08:31.356526 | orchestrator | 2025-09-19 07:08:31 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:08:34.401752 | orchestrator | 2025-09-19 07:08:34 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:08:34.402560 | orchestrator | 2025-09-19 07:08:34 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:08:34.402595 | orchestrator | 2025-09-19 07:08:34 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:08:37.444485 | orchestrator | 2025-09-19 07:08:37 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:08:37.445981 | orchestrator | 2025-09-19 07:08:37 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:08:37.446603 | orchestrator | 2025-09-19 07:08:37 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:08:40.490480 | orchestrator | 2025-09-19 07:08:40 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:08:40.490606 | orchestrator | 2025-09-19 07:08:40 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:08:40.490631 | orchestrator | 2025-09-19 07:08:40 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:08:43.532624 | orchestrator | 2025-09-19 07:08:43 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:08:43.532722 | orchestrator | 2025-09-19 07:08:43 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:08:43.532745 | orchestrator | 2025-09-19 07:08:43 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:08:46.571460 | orchestrator | 2025-09-19 07:08:46 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:08:46.572381 | orchestrator | 2025-09-19 07:08:46 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:08:46.573338 | orchestrator | 2025-09-19 07:08:46 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:08:49.612264 | orchestrator | 2025-09-19 07:08:49 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:08:49.614789 | orchestrator | 2025-09-19 07:08:49 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:08:49.615174 | orchestrator | 2025-09-19 07:08:49 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:08:52.664702 | orchestrator | 2025-09-19 07:08:52 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:08:52.666735 | orchestrator | 2025-09-19 07:08:52 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:08:52.666856 | orchestrator | 2025-09-19 07:08:52 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:08:55.702857 | orchestrator | 2025-09-19 07:08:55 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:08:55.703344 | orchestrator | 2025-09-19 07:08:55 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:08:55.703629 | orchestrator | 2025-09-19 07:08:55 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:08:58.733773 | orchestrator | 2025-09-19 07:08:58 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state STARTED
2025-09-19 07:08:58.733985 | orchestrator | 2025-09-19 07:08:58 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED
2025-09-19 07:08:58.734005 | orchestrator | 2025-09-19 07:08:58 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:09:01.778543 | orchestrator | 2025-09-19 07:09:01 | INFO  | Task d2d7f871-772b-48e1-bf76-3bdd33c37a5f is in state SUCCESS
2025-09-19 07:09:01.779494 | orchestrator | 2025-09-19 07:09:01.779536
| orchestrator |
2025-09-19 07:09:01.779550 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2025-09-19 07:09:01.779560 | orchestrator |
2025-09-19 07:09:01.779568 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-09-19 07:09:01.779575 | orchestrator | Friday 19 September 2025 07:06:55 +0000 (0:00:00.149) 0:00:00.149 ******
2025-09-19 07:09:01.779586 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-09-19 07:09:01.779598 | orchestrator |
2025-09-19 07:09:01.779609 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-09-19 07:09:01.779622 | orchestrator | Friday 19 September 2025 07:06:55 +0000 (0:00:00.779) 0:00:00.929 ******
2025-09-19 07:09:01.779630 | orchestrator | changed: [testbed-manager]
2025-09-19 07:09:01.779637 | orchestrator |
2025-09-19 07:09:01.779644 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2025-09-19 07:09:01.779652 | orchestrator | Friday 19 September 2025 07:06:56 +0000 (0:00:01.010) 0:00:01.939 ******
2025-09-19 07:09:01.779659 | orchestrator | changed: [testbed-manager]
2025-09-19 07:09:01.779666 | orchestrator |
2025-09-19 07:09:01.779673 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:09:01.779681 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:09:01.779689 | orchestrator |
2025-09-19 07:09:01.779696 | orchestrator |
2025-09-19 07:09:01.779704 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:09:01.779711 | orchestrator | Friday 19 September 2025 07:06:57 +0000 (0:00:00.355) 0:00:02.295 ******
2025-09-19 07:09:01.779718 | orchestrator | ===============================================================================
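The play above fetches the kubeconfig from testbed-node-0 and then rewrites the API server address so that kubectl on testbed-manager reaches the cluster over the management network (192.168.16.10) instead of a node-local address. The playbook source is not shown in this log, so the following is only a minimal sketch of that rewrite step; the regex approach and the sample kubeconfig content are assumptions, not the actual task implementation:

```python
import re

def set_kubeconfig_server(kubeconfig_text: str, new_server: str) -> str:
    # Rewrite every `server:` value in the kubeconfig text so that
    # kubectl talks to the given endpoint. This mimics what the
    # "Change server address in the kubeconfig file" task does; the
    # real task may use lineinfile/replace instead (assumption).
    return re.sub(r"(^\s*server:\s*)\S+", r"\g<1>" + new_server,
                  kubeconfig_text, flags=re.MULTILINE)

# Hypothetical kubeconfig as written on the node, pointing at a
# node-local address (illustrative only; not taken from this log).
kubeconfig = """apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: testbed
"""

updated = set_kubeconfig_server(kubeconfig, "https://192.168.16.10:6443")
print(updated)
```

The rewrite is needed because a kubeconfig generated on a node typically points at an address that is only reachable from that node; replacing it with the node's management IP makes the same credentials usable from the manager.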
2025-09-19 07:09:01.779725 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.01s
2025-09-19 07:09:01.779732 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.78s
2025-09-19 07:09:01.779759 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.36s
2025-09-19 07:09:01.779824 | orchestrator |
2025-09-19 07:09:01.779832 | orchestrator |
2025-09-19 07:09:01.779850 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-09-19 07:09:01.779933 | orchestrator |
2025-09-19 07:09:01.779943 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-09-19 07:09:01.779950 | orchestrator | Friday 19 September 2025 07:06:54 +0000 (0:00:00.186) 0:00:00.186 ******
2025-09-19 07:09:01.779957 | orchestrator | ok: [testbed-manager]
2025-09-19 07:09:01.779965 | orchestrator |
2025-09-19 07:09:01.779972 | orchestrator | TASK [Create .kube directory] **************************************************
2025-09-19 07:09:01.780288 | orchestrator | Friday 19 September 2025 07:06:55 +0000 (0:00:00.532) 0:00:00.719 ******
2025-09-19 07:09:01.780305 | orchestrator | ok: [testbed-manager]
2025-09-19 07:09:01.781097 | orchestrator |
2025-09-19 07:09:01.781123 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-09-19 07:09:01.781131 | orchestrator | Friday 19 September 2025 07:06:55 +0000 (0:00:00.584) 0:00:01.303 ******
2025-09-19 07:09:01.781139 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-09-19 07:09:01.781146 | orchestrator |
2025-09-19 07:09:01.781153 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-09-19 07:09:01.781160 | orchestrator | Friday 19 September 2025 07:06:56 +0000 (0:00:00.587) 0:00:01.890 ******
2025-09-19 07:09:01.781167 | orchestrator | changed: [testbed-manager]
2025-09-19 07:09:01.781174 | orchestrator |
2025-09-19 07:09:01.781181 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-09-19 07:09:01.781189 | orchestrator | Friday 19 September 2025 07:06:57 +0000 (0:00:01.019) 0:00:02.910 ******
2025-09-19 07:09:01.781196 | orchestrator | changed: [testbed-manager]
2025-09-19 07:09:01.781203 | orchestrator |
2025-09-19 07:09:01.781210 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-09-19 07:09:01.781217 | orchestrator | Friday 19 September 2025 07:06:58 +0000 (0:00:00.717) 0:00:03.627 ******
2025-09-19 07:09:01.781224 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-19 07:09:01.781231 | orchestrator |
2025-09-19 07:09:01.781238 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-09-19 07:09:01.781245 | orchestrator | Friday 19 September 2025 07:06:59 +0000 (0:00:01.453) 0:00:05.081 ******
2025-09-19 07:09:01.781252 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-19 07:09:01.781259 | orchestrator |
2025-09-19 07:09:01.781265 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-09-19 07:09:01.781272 | orchestrator | Friday 19 September 2025 07:07:00 +0000 (0:00:00.424) 0:00:05.931 ******
2025-09-19 07:09:01.781279 | orchestrator | ok: [testbed-manager]
2025-09-19 07:09:01.781286 | orchestrator |
2025-09-19 07:09:01.781293 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-09-19 07:09:01.781300 | orchestrator | Friday 19 September 2025 07:07:00 +0000 (0:00:00.424) 0:00:06.356 ******
2025-09-19 07:09:01.781307 | orchestrator | ok: [testbed-manager]
2025-09-19 07:09:01.781314 | orchestrator |
2025-09-19 07:09:01.781321 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:09:01.781328 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:09:01.781336 | orchestrator |
2025-09-19 07:09:01.781343 | orchestrator |
2025-09-19 07:09:01.781350 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:09:01.781357 | orchestrator | Friday 19 September 2025 07:07:01 +0000 (0:00:00.309) 0:00:06.666 ******
2025-09-19 07:09:01.781364 | orchestrator | ===============================================================================
2025-09-19 07:09:01.781371 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.45s
2025-09-19 07:09:01.781378 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.02s
2025-09-19 07:09:01.781395 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.85s
2025-09-19 07:09:01.781432 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.72s
2025-09-19 07:09:01.781440 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.59s
2025-09-19 07:09:01.781447 | orchestrator | Create .kube directory -------------------------------------------------- 0.58s
2025-09-19 07:09:01.781455 | orchestrator | Get home directory of operator user ------------------------------------- 0.53s
2025-09-19 07:09:01.781462 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.42s
2025-09-19 07:09:01.781469 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.31s
2025-09-19 07:09:01.781476 | orchestrator |
2025-09-19 07:09:01.781483 | orchestrator |
2025-09-19 07:09:01.781490 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19
07:09:01.781497 | orchestrator |
2025-09-19 07:09:01.781503 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 07:09:01.781510 | orchestrator | Friday 19 September 2025 07:02:39 +0000 (0:00:00.452) 0:00:00.452 ******
2025-09-19 07:09:01.781517 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:09:01.781524 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:09:01.781531 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:09:01.781538 | orchestrator |
2025-09-19 07:09:01.781545 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 07:09:01.781552 | orchestrator | Friday 19 September 2025 07:02:39 +0000 (0:00:00.595) 0:00:01.048 ******
2025-09-19 07:09:01.781559 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-09-19 07:09:01.781566 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-09-19 07:09:01.781573 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-09-19 07:09:01.781579 | orchestrator |
2025-09-19 07:09:01.781586 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-09-19 07:09:01.781593 | orchestrator |
2025-09-19 07:09:01.781600 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-09-19 07:09:01.781607 | orchestrator | Friday 19 September 2025 07:02:40 +0000 (0:00:00.470) 0:00:01.518 ******
2025-09-19 07:09:01.781618 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:09:01.781626 | orchestrator |
2025-09-19 07:09:01.781633 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-09-19 07:09:01.781639 | orchestrator | Friday 19 September 2025 07:02:41 +0000 (0:00:00.993) 0:00:02.511 ******
2025-09-19 07:09:01.781646 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:09:01.781653 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:09:01.781660 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:09:01.781667 | orchestrator |
2025-09-19 07:09:01.781674 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-09-19 07:09:01.781681 | orchestrator | Friday 19 September 2025 07:02:42 +0000 (0:00:00.805) 0:00:03.317 ******
2025-09-19 07:09:01.781688 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:09:01.781695 | orchestrator |
2025-09-19 07:09:01.781703 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-09-19 07:09:01.781711 | orchestrator | Friday 19 September 2025 07:02:43 +0000 (0:00:01.063) 0:00:04.381 ******
2025-09-19 07:09:01.781719 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:09:01.781727 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:09:01.781735 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:09:01.781794 | orchestrator |
2025-09-19 07:09:01.781803 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-09-19 07:09:01.781812 | orchestrator | Friday 19 September 2025 07:02:43 +0000 (0:00:00.725) 0:00:05.107 ******
2025-09-19 07:09:01.781820 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-09-19 07:09:01.781828 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-09-19 07:09:01.781843 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-09-19 07:09:01.781850 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-09-19 07:09:01.781857 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-09-19 07:09:01.781865 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-09-19 07:09:01.781872 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-09-19 07:09:01.781882 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-09-19 07:09:01.781893 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-09-19 07:09:01.781905 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-09-19 07:09:01.781918 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-09-19 07:09:01.781929 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-09-19 07:09:01.782898 | orchestrator |
2025-09-19 07:09:01.782930 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-09-19 07:09:01.782938 | orchestrator | Friday 19 September 2025 07:02:47 +0000 (0:00:03.470) 0:00:08.577 ******
2025-09-19 07:09:01.782946 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-09-19 07:09:01.782955 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-09-19 07:09:01.782963 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-09-19 07:09:01.782971 | orchestrator |
2025-09-19 07:09:01.782979 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-09-19 07:09:01.783049 | orchestrator | Friday 19 September 2025 07:02:48 +0000 (0:00:00.785) 0:00:09.362 ******
2025-09-19 07:09:01.783083 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-09-19 07:09:01.783091 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-09-19 07:09:01.783098 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-09-19
07:09:01.783106 | orchestrator | 2025-09-19 07:09:01.783113 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-19 07:09:01.783121 | orchestrator | Friday 19 September 2025 07:02:49 +0000 (0:00:01.378) 0:00:10.741 ****** 2025-09-19 07:09:01.783129 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-09-19 07:09:01.783137 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.783144 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-09-19 07:09:01.783152 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.783159 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-09-19 07:09:01.783167 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.783174 | orchestrator | 2025-09-19 07:09:01.783182 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-09-19 07:09:01.783190 | orchestrator | Friday 19 September 2025 07:02:50 +0000 (0:00:00.585) 0:00:11.326 ****** 2025-09-19 07:09:01.783200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-19 07:09:01.783275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-19 07:09:01.783301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-19 07:09:01.783309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:09:01.783319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:09:01.783773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:09:01.783820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 07:09:01.783881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 07:09:01.783941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 07:09:01.783950 | orchestrator | 2025-09-19 07:09:01.783957 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-09-19 07:09:01.783966 | orchestrator | Friday 19 September 2025 07:02:52 +0000 (0:00:02.132) 0:00:13.458 ****** 2025-09-19 07:09:01.783973 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.783981 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.783989 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.783996 | orchestrator | 2025-09-19 07:09:01.784004 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-09-19 07:09:01.784011 | orchestrator | Friday 19 September 2025 07:02:53 +0000 (0:00:00.912) 0:00:14.371 ****** 2025-09-19 07:09:01.784019 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-09-19 07:09:01.784027 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-09-19 07:09:01.784034 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-09-19 07:09:01.784042 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-09-19 
07:09:01.784049 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-09-19 07:09:01.784086 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-09-19 07:09:01.784094 | orchestrator | 2025-09-19 07:09:01.784102 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-09-19 07:09:01.784109 | orchestrator | Friday 19 September 2025 07:02:55 +0000 (0:00:02.132) 0:00:16.503 ****** 2025-09-19 07:09:01.784117 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.784125 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.784132 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.784140 | orchestrator | 2025-09-19 07:09:01.784147 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-09-19 07:09:01.784155 | orchestrator | Friday 19 September 2025 07:02:56 +0000 (0:00:01.181) 0:00:17.685 ****** 2025-09-19 07:09:01.784162 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:09:01.784170 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:09:01.784178 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:09:01.784394 | orchestrator | 2025-09-19 07:09:01.784403 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-09-19 07:09:01.784410 | orchestrator | Friday 19 September 2025 07:02:58 +0000 (0:00:01.742) 0:00:19.427 ****** 2025-09-19 07:09:01.784473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.784485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.784502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.784517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9669b8aa5168a22f6dd824a5b79508378e10bb4', '__omit_place_holder__f9669b8aa5168a22f6dd824a5b79508378e10bb4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 07:09:01.784526 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.784534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.784542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.784550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.784635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.784687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.784704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.784712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9669b8aa5168a22f6dd824a5b79508378e10bb4', '__omit_place_holder__f9669b8aa5168a22f6dd824a5b79508378e10bb4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 07:09:01.784721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9669b8aa5168a22f6dd824a5b79508378e10bb4', '__omit_place_holder__f9669b8aa5168a22f6dd824a5b79508378e10bb4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 07:09:01.784729 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.784736 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.784744 | orchestrator | 2025-09-19 07:09:01.784752 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-09-19 07:09:01.784760 | orchestrator | Friday 19 September 2025 07:03:00 +0000 (0:00:01.798) 0:00:21.226 ****** 2025-09-19 07:09:01.784768 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-19 07:09:01.784822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-19 07:09:01.784839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 
'timeout': '30'}}}) 2025-09-19 07:09:01.784851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:09:01.784860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.784868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9669b8aa5168a22f6dd824a5b79508378e10bb4', '__omit_place_holder__f9669b8aa5168a22f6dd824a5b79508378e10bb4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}}) 
 2025-09-19 07:09:01.784876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:09:01.784915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.784931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9669b8aa5168a22f6dd824a5b79508378e10bb4', '__omit_place_holder__f9669b8aa5168a22f6dd824a5b79508378e10bb4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 
07:09:01.784939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:09:01.784951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.784959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9669b8aa5168a22f6dd824a5b79508378e10bb4', '__omit_place_holder__f9669b8aa5168a22f6dd824a5b79508378e10bb4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 07:09:01.784967 | 
orchestrator | 2025-09-19 07:09:01.784974 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-09-19 07:09:01.784982 | orchestrator | Friday 19 September 2025 07:03:03 +0000 (0:00:03.581) 0:00:24.807 ****** 2025-09-19 07:09:01.784990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-19 07:09:01.785029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-19 07:09:01.785044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-19 07:09:01.785052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:09:01.785090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:09:01.785099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:09:01.785107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 07:09:01.785115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 07:09:01.786289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 07:09:01.786350 | orchestrator |
2025-09-19 07:09:01.786366 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2025-09-19 07:09:01.786379 | orchestrator | Friday 19 September 2025 07:03:07 +0000 (0:00:03.377) 0:00:28.185 ******
2025-09-19 07:09:01.786391 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-09-19 07:09:01.786402 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-09-19 07:09:01.786413 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-09-19 07:09:01.786425 | orchestrator |
2025-09-19 07:09:01.786436 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2025-09-19 07:09:01.786448 | orchestrator | Friday 19 September 2025 07:03:09 +0000 (0:00:02.122) 0:00:30.307 ******
2025-09-19 07:09:01.786459 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-09-19 07:09:01.786470 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-09-19 07:09:01.786481 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-09-19 07:09:01.786492 | orchestrator |
2025-09-19 07:09:01.786503 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2025-09-19 07:09:01.786514 | orchestrator | Friday 19 September 2025 07:03:13 +0000 (0:00:04.704) 0:00:35.012 ******
2025-09-19 07:09:01.786526 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.786537 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.786548 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.786559 | orchestrator |
2025-09-19 07:09:01.786588 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2025-09-19 07:09:01.786600 | orchestrator | Friday 19 September 2025 07:03:15 +0000 (0:00:02.073) 0:00:37.085 ******
2025-09-19 07:09:01.786612 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-09-19 07:09:01.786624 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-09-19 07:09:01.786635 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-09-19 07:09:01.786646 | orchestrator |
2025-09-19 07:09:01.786658 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2025-09-19 07:09:01.786669 | orchestrator | Friday 19 September 2025 07:03:19 +0000 (0:00:03.778) 0:00:40.864 ******
2025-09-19 07:09:01.786680 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-09-19 07:09:01.786691 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-09-19 07:09:01.786702 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-09-19 07:09:01.786727 | orchestrator |
2025-09-19 07:09:01.786739 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2025-09-19 07:09:01.786750 | orchestrator | Friday 19 September 2025 07:03:22 +0000 (0:00:02.498) 0:00:43.363 ******
2025-09-19 07:09:01.786763 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2025-09-19 07:09:01.786782 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2025-09-19 07:09:01.786803 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2025-09-19 07:09:01.786823 | orchestrator |
2025-09-19 07:09:01.786843 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2025-09-19 07:09:01.786861 | orchestrator | Friday 19 September 2025 07:03:24 +0000 (0:00:02.216) 0:00:45.580 ******
2025-09-19 07:09:01.786879 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2025-09-19 07:09:01.786896 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2025-09-19 07:09:01.786917 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2025-09-19 07:09:01.786936 | orchestrator |
2025-09-19 07:09:01.786956 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-09-19 07:09:01.786975 | orchestrator | Friday 19 September 2025 07:03:26 +0000 (0:00:01.925) 0:00:47.505 ******
2025-09-19 07:09:01.786994 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:09:01.787015 | orchestrator |
2025-09-19 07:09:01.787037 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2025-09-19 07:09:01.787049 | orchestrator | Friday 19 September 2025 07:03:27 +0000 (0:00:00.946) 0:00:48.451 ******
2025-09-19 07:09:01.787112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval':
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-19 07:09:01.787127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-19 07:09:01.787139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-19 07:09:01.787158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:09:01.787181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:09:01.787193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:09:01.787205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 07:09:01.787228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 07:09:01.787241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 07:09:01.787252 | orchestrator | 2025-09-19 07:09:01.787264 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-09-19 07:09:01.787275 | orchestrator | Friday 19 September 2025 07:03:30 +0000 (0:00:03.413) 0:00:51.865 ****** 2025-09-19 07:09:01.787288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.787306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.787319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.787330 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.787373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.787393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.787406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.787417 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.787429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.787452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.787464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.787476 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.787487 | orchestrator | 2025-09-19 07:09:01.787498 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-09-19 07:09:01.787510 | orchestrator | Friday 19 September 2025 07:03:31 +0000 (0:00:00.906) 0:00:52.772 ****** 2025-09-19 07:09:01.787521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.787533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.787554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.787566 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.787577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.787600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.787612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.787624 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.787636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.787647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.787680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.787692 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.787703 | orchestrator | 2025-09-19 07:09:01.787715 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA 
certificates] ******** 2025-09-19 07:09:01.787726 | orchestrator | Friday 19 September 2025 07:03:33 +0000 (0:00:02.224) 0:00:54.997 ****** 2025-09-19 07:09:01.787738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.787761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.787774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.787785 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.787797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.787809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.787820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.787832 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.787851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.787871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.787887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.787899 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.787910 | orchestrator | 2025-09-19 07:09:01.787922 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-19 07:09:01.787933 | orchestrator | Friday 19 September 2025 07:03:34 +0000 (0:00:00.858) 0:00:55.856 ****** 2025-09-19 07:09:01.787945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.787957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.787968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.787980 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.787998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.788022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.788039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.788052 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.788083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.788095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.788107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.788118 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.788130 | orchestrator | 2025-09-19 07:09:01.788141 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-19 07:09:01.788152 | orchestrator | Friday 19 September 2025 07:03:35 +0000 (0:00:00.634) 0:00:56.491 ****** 2025-09-19 07:09:01.788172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.788191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.788203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.788215 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.788231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.788243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.788254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.788266 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.788277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.788302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.788314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.788326 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.788337 | orchestrator | 2025-09-19 07:09:01.788348 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-09-19 07:09:01.788360 | orchestrator | Friday 19 September 2025 07:03:36 +0000 (0:00:01.197) 0:00:57.688 ****** 2025-09-19 07:09:01.788376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.788388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.788400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.788411 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.788423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.788449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.788461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.788473 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.788489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.788502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.788514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.788525 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.788536 | orchestrator | 2025-09-19 07:09:01.788548 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-09-19 07:09:01.788559 | orchestrator | Friday 19 September 2025 07:03:37 +0000 (0:00:01.260) 0:00:58.949 ****** 2025-09-19 07:09:01.788571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.788595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.788607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.788619 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.788631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.788648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.788660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.788672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 
07:09:01.788690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.788701 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.788722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.788734 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.788745 | orchestrator | 2025-09-19 07:09:01.788756 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-09-19 07:09:01.788767 | orchestrator | Friday 19 September 2025 07:03:39 +0000 (0:00:01.633) 0:01:00.582 ****** 2025-09-19 07:09:01.788779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.788795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.788808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.788819 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.788831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.788848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.788865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.788877 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.788889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 07:09:01.788900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:09:01.788917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:09:01.788929 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.788941 | orchestrator | 2025-09-19 07:09:01.788952 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-09-19 07:09:01.788963 | orchestrator | Friday 19 September 2025 07:03:40 +0000 (0:00:01.067) 0:01:01.650 ****** 2025-09-19 07:09:01.788981 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-19 07:09:01.788992 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-19 07:09:01.789004 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-19 07:09:01.789015 | orchestrator | 2025-09-19 07:09:01.789026 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-09-19 07:09:01.789037 | orchestrator | Friday 19 September 2025 07:03:41 +0000 (0:00:01.325) 0:01:02.976 ****** 2025-09-19 07:09:01.789048 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-19 07:09:01.789079 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-19 07:09:01.789090 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-19 07:09:01.789101 | orchestrator | 2025-09-19 07:09:01.789113 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-09-19 07:09:01.789124 | orchestrator | Friday 19 September 2025 07:03:43 +0000 (0:00:01.426) 0:01:04.402 ****** 2025-09-19 07:09:01.789135 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-19 07:09:01.789146 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-19 07:09:01.789158 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-19 07:09:01.789169 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-19 07:09:01.789181 | 
orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.789192 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-19 07:09:01.789204 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.789215 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-19 07:09:01.789226 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.789237 | orchestrator | 2025-09-19 07:09:01.789249 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-09-19 07:09:01.789260 | orchestrator | Friday 19 September 2025 07:03:44 +0000 (0:00:00.804) 0:01:05.207 ****** 2025-09-19 07:09:01.789279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-19 07:09:01.789292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-19 07:09:01.789312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-19 07:09:01.789331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:09:01.789343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:09:01.789355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:09:01.789373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 07:09:01.789385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 07:09:01.789397 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 07:09:01.789415 | orchestrator | 2025-09-19 07:09:01.789427 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-09-19 07:09:01.789443 | orchestrator | Friday 19 September 2025 07:03:46 +0000 (0:00:02.449) 0:01:07.657 ****** 2025-09-19 07:09:01.789454 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:09:01.789466 | orchestrator | 2025-09-19 07:09:01.789477 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-09-19 07:09:01.789488 | orchestrator | Friday 19 September 2025 07:03:47 +0000 (0:00:00.525) 0:01:08.182 ****** 2025-09-19 07:09:01.789501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-19 07:09:01.789514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-19 07:09:01.789526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.789544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-19 
07:09:01.789557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-19 07:09:01.789580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-19 07:09:01.789592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.789604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.789616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-19 07:09:01.789633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-19 07:09:01.789645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.789669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.789681 | orchestrator | 2025-09-19 07:09:01.789693 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-09-19 07:09:01.789704 | orchestrator | Friday 19 September 2025 07:03:51 +0000 (0:00:04.363) 0:01:12.546 ****** 2025-09-19 07:09:01.789716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': 
{'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-19 07:09:01.789729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-19 07:09:01.789740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-19 07:09:01.789758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-19 07:09:01.789777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.789794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.789806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.789818 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.789830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.789841 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.789853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-19 07:09:01.789871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-19 07:09:01.789890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.789903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.789919 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.789930 | orchestrator | 2025-09-19 07:09:01.789942 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-09-19 07:09:01.789953 | orchestrator | Friday 19 September 2025 07:03:52 +0000 (0:00:01.278) 0:01:13.824 ****** 2025-09-19 07:09:01.789965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-19 07:09:01.789979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-19 07:09:01.789990 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.790002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-19 07:09:01.790047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-19 07:09:01.790080 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.790092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-19 07:09:01.790104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-19 07:09:01.790115 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.790127 | orchestrator | 2025-09-19 07:09:01.790138 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-09-19 07:09:01.790150 | orchestrator | Friday 19 September 2025 07:03:54 +0000 (0:00:01.444) 0:01:15.269 ****** 2025-09-19 07:09:01.790161 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.790172 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.790184 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.790195 | orchestrator | 2025-09-19 07:09:01.790206 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-09-19 07:09:01.790218 | orchestrator | Friday 19 September 2025 07:03:56 +0000 (0:00:02.421) 0:01:17.690 ****** 2025-09-19 07:09:01.790229 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.790241 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.790252 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.790271 | orchestrator | 2025-09-19 07:09:01.790283 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-09-19 07:09:01.790294 | orchestrator | Friday 19 September 2025 07:03:58 +0000 (0:00:02.421) 0:01:20.111 ****** 2025-09-19 07:09:01.790305 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:09:01.790316 | orchestrator | 2025-09-19 07:09:01.790328 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-09-19 07:09:01.790339 | orchestrator | Friday 
19 September 2025 07:03:59 +0000 (0:00:00.938) 0:01:21.049 ****** 2025-09-19 07:09:01.790360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:09:01.790374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.790392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.790405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:09:01.790418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:09:01.790443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.790456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.790468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.790480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.790492 | orchestrator | 2025-09-19 07:09:01.790503 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-09-19 07:09:01.790515 | orchestrator | Friday 19 September 2025 07:04:04 +0000 (0:00:04.588) 0:01:25.638 ****** 2025-09-19 07:09:01.790527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 07:09:01.790554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.790583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.790596 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.790612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 07:09:01.790624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.790635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.790653 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.790665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 07:09:01.790684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.790697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': 
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.790709 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.790721 | orchestrator | 2025-09-19 07:09:01.790732 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-09-19 07:09:01.790743 | orchestrator | Friday 19 September 2025 07:04:05 +0000 (0:00:00.700) 0:01:26.339 ****** 2025-09-19 07:09:01.790760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 07:09:01.790773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 07:09:01.790784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 07:09:01.790797 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.790808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 07:09:01.790820 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 07:09:01.790839 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.790850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 07:09:01.790862 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.790873 | orchestrator | 2025-09-19 07:09:01.790885 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-09-19 07:09:01.790896 | orchestrator | Friday 19 September 2025 07:04:06 +0000 (0:00:01.740) 0:01:28.079 ****** 2025-09-19 07:09:01.790908 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.790919 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.790931 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.790942 | orchestrator | 2025-09-19 07:09:01.790953 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-09-19 07:09:01.790965 | orchestrator | Friday 19 September 2025 07:04:09 +0000 (0:00:02.089) 0:01:30.168 ****** 2025-09-19 07:09:01.790976 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.790987 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.790998 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.791010 | orchestrator | 2025-09-19 07:09:01.791021 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-09-19 07:09:01.791033 | orchestrator | Friday 19 September 2025 07:04:11 +0000 (0:00:02.328) 0:01:32.497 ****** 2025-09-19 07:09:01.791044 | orchestrator | skipping: [testbed-node-0] 2025-09-19 
07:09:01.791117 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.791131 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.791142 | orchestrator | 2025-09-19 07:09:01.791154 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-09-19 07:09:01.791166 | orchestrator | Friday 19 September 2025 07:04:11 +0000 (0:00:00.573) 0:01:33.071 ****** 2025-09-19 07:09:01.791177 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:09:01.791188 | orchestrator | 2025-09-19 07:09:01.791199 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-09-19 07:09:01.791210 | orchestrator | Friday 19 September 2025 07:04:12 +0000 (0:00:00.624) 0:01:33.696 ****** 2025-09-19 07:09:01.791231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-19 07:09:01.791250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 
rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-19 07:09:01.791272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-19 07:09:01.791284 | orchestrator | 2025-09-19 07:09:01.791295 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-09-19 07:09:01.791305 | orchestrator | Friday 19 September 2025 07:04:14 +0000 (0:00:02.407) 0:01:36.103 ****** 2025-09-19 07:09:01.791315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 
192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-19 07:09:01.791326 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.791342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-19 07:09:01.791353 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.791364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 
fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-09-19 07:09:01.791374 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.791384 | orchestrator |
2025-09-19 07:09:01.791394 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2025-09-19 07:09:01.791411 | orchestrator | Friday 19 September 2025 07:04:17 +0000 (0:00:02.055) 0:01:38.158 ******
2025-09-19 07:09:01.791426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-19 07:09:01.791439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-19 07:09:01.791450 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.791460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-19 07:09:01.791470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-19 07:09:01.791481 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.791491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-19 07:09:01.791502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-19 07:09:01.791512 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.791523 | orchestrator |
2025-09-19 07:09:01.791533 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2025-09-19 07:09:01.791543 | orchestrator | Friday 19 September 2025 07:04:18 +0000 (0:00:01.959) 0:01:40.118 ******
2025-09-19 07:09:01.791558 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.791569 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.791579 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.791589 | orchestrator |
2025-09-19 07:09:01.791599 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2025-09-19 07:09:01.791609 | orchestrator | Friday 19 September 2025 07:04:19 +0000 (0:00:00.373) 0:01:40.491 ******
2025-09-19 07:09:01.791619 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.791629 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.791639 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.791649 | orchestrator |
2025-09-19 07:09:01.791659 | orchestrator | TASK [include_role : cinder] ***************************************************
2025-09-19 07:09:01.791675 | orchestrator | Friday 19 September 2025 07:04:20 +0000 (0:00:01.331) 0:01:41.823 ******
2025-09-19 07:09:01.791685 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:09:01.791695 | orchestrator |
2025-09-19 07:09:01.791706 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2025-09-19 07:09:01.791716 | orchestrator | Friday 19 September 2025 07:04:21 +0000 (0:00:00.912) 0:01:42.735 ******
2025-09-19 07:09:01.791732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-19 07:09:01.791743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.791755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.791766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.791783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-19 07:09:01.791800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.791815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.791826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.791837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-19 07:09:01.791854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.791871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.791889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.791900 | orchestrator |
2025-09-19 07:09:01.791910 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2025-09-19 07:09:01.791921 | orchestrator | Friday 19 September 2025 07:04:25 +0000 (0:00:03.956) 0:01:46.692 ******
2025-09-19 07:09:01.791932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-19 07:09:01.791942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.791958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.791975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.791985 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.792000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-19 07:09:01.792011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.792021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-19 07:09:01.792032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.792054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.792087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.792098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.792109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.792120 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.792130 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.792140 | orchestrator |
2025-09-19 07:09:01.792150 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-09-19 07:09:01.792161 | orchestrator | Friday 19 September 2025 07:04:26 +0000 (0:00:01.134) 0:01:47.827 ******
2025-09-19 07:09:01.792172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-19 07:09:01.792188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-19 07:09:01.792198 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.792209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-19 07:09:01.792233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-19 07:09:01.792244 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.792255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-19 07:09:01.792265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-19 07:09:01.792275 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.792286 | orchestrator |
2025-09-19 07:09:01.792296 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-09-19 07:09:01.792306 | orchestrator | Friday 19 September 2025 07:04:28 +0000 (0:00:01.408) 0:01:49.235 ******
2025-09-19 07:09:01.792316 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:09:01.792326 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:09:01.792336 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:09:01.792346 | orchestrator |
2025-09-19 07:09:01.792356 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-09-19 07:09:01.792367 | orchestrator | Friday 19 September 2025 07:04:29 +0000 (0:00:01.464) 0:01:50.699 ******
2025-09-19 07:09:01.792377 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:09:01.792387 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:09:01.792397 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:09:01.792407 | orchestrator |
2025-09-19 07:09:01.792417 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-09-19 07:09:01.792427 | orchestrator | Friday 19 September 2025 07:04:31 +0000 (0:00:02.381) 0:01:53.081 ******
2025-09-19 07:09:01.792442 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.792452 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.792462 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.792472 | orchestrator |
2025-09-19 07:09:01.792482 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-09-19 07:09:01.792492 | orchestrator | Friday 19 September 2025 07:04:32 +0000 (0:00:00.388) 0:01:53.470 ******
2025-09-19 07:09:01.792502 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.792512 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.792522 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.792532 | orchestrator |
2025-09-19 07:09:01.792542 | orchestrator | TASK [include_role : designate] ************************************************
2025-09-19 07:09:01.792552 | orchestrator | Friday 19 September 2025 07:04:32 +0000 (0:00:00.566) 0:01:54.036 ******
2025-09-19 07:09:01.792563 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:09:01.792573 | orchestrator |
2025-09-19 07:09:01.792583 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-09-19 07:09:01.792593 | orchestrator | Friday 19 September 2025 07:04:33 +0000 (0:00:00.904) 0:01:54.941 ******
2025-09-19 07:09:01.792604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 07:09:01.792622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 07:09:01.792640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.792652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.792666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.792677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.792694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.792705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 07:09:01.792721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 07:09:01.792732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 07:09:01.792746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.792757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.792773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 07:09:01.792784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.792800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.792811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.792821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.792836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.792853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.792864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.792874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.792884 | orchestrator | 2025-09-19 07:09:01.792895 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-09-19 07:09:01.792905 | orchestrator | Friday 19 September 2025 07:04:37 +0000 (0:00:03.778) 0:01:58.719 ****** 2025-09-19 07:09:01.792922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:09:01.792933 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 07:09:01.792948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.792968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.792978 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.792989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.793004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.793015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:09:01.793026 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.793041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 07:09:01.793073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.793085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.793096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.793113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.793124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.793134 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.793150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:09:01.793166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 07:09:01.793177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.793188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.793204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.793215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.793226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.793242 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.793253 | orchestrator | 2025-09-19 07:09:01.793263 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-09-19 07:09:01.793277 | orchestrator | Friday 
19 September 2025 07:04:38 +0000 (0:00:01.317) 0:02:00.037 ****** 2025-09-19 07:09:01.793289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-19 07:09:01.793299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-19 07:09:01.793310 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.793320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-19 07:09:01.793330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-19 07:09:01.793341 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.793351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-19 07:09:01.793361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-19 07:09:01.793371 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.793381 | orchestrator | 2025-09-19 07:09:01.793392 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-09-19 07:09:01.793402 | orchestrator | Friday 19 September 2025 07:04:40 +0000 (0:00:01.113) 0:02:01.151 ****** 2025-09-19 
07:09:01.793412 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.793422 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.793432 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.793442 | orchestrator | 2025-09-19 07:09:01.793452 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-09-19 07:09:01.793462 | orchestrator | Friday 19 September 2025 07:04:41 +0000 (0:00:01.313) 0:02:02.465 ****** 2025-09-19 07:09:01.793472 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.793482 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.793492 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.793502 | orchestrator | 2025-09-19 07:09:01.793513 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-09-19 07:09:01.793523 | orchestrator | Friday 19 September 2025 07:04:43 +0000 (0:00:01.979) 0:02:04.444 ****** 2025-09-19 07:09:01.793533 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.793543 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.793553 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.793563 | orchestrator | 2025-09-19 07:09:01.793574 | orchestrator | TASK [include_role : glance] *************************************************** 2025-09-19 07:09:01.793584 | orchestrator | Friday 19 September 2025 07:04:43 +0000 (0:00:00.484) 0:02:04.928 ****** 2025-09-19 07:09:01.793594 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:09:01.793604 | orchestrator | 2025-09-19 07:09:01.793614 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-09-19 07:09:01.793635 | orchestrator | Friday 19 September 2025 07:04:44 +0000 (0:00:00.787) 0:02:05.716 ****** 2025-09-19 07:09:01.793653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 07:09:01.793666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 07:09:01.793687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 07:09:01.793710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 07:09:01.793729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 
'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 07:09:01.793755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 07:09:01.793767 | orchestrator | 2025-09-19 07:09:01.793778 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-09-19 07:09:01.793788 | orchestrator | Friday 19 September 2025 07:04:48 +0000 (0:00:03.948) 0:02:09.665 ****** 2025-09-19 07:09:01.793806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 
'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 07:09:01.793828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 
6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 07:09:01.793839 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.793856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 07:09:01.793876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 07:09:01.793888 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.793912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 07:09:01.793941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 
'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 07:09:01.793954 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.793964 | orchestrator | 2025-09-19 07:09:01.793975 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-09-19 07:09:01.793985 | orchestrator | Friday 19 September 2025 07:04:51 +0000 (0:00:03.128) 0:02:12.793 ****** 2025-09-19 07:09:01.793996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 07:09:01.794007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 07:09:01.794091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 07:09:01.794115 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.794126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 07:09:01.794137 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.794161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 07:09:01.794173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 07:09:01.794184 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.794194 | orchestrator | 2025-09-19 07:09:01.794204 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-09-19 07:09:01.794215 | orchestrator | Friday 19 September 2025 07:04:54 +0000 (0:00:02.842) 0:02:15.636 ****** 2025-09-19 07:09:01.794225 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.794235 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.794245 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.794255 | orchestrator | 2025-09-19 07:09:01.794265 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-09-19 07:09:01.794276 | orchestrator | Friday 19 September 2025 07:04:55 +0000 (0:00:01.260) 0:02:16.897 ****** 2025-09-19 07:09:01.794286 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.794296 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.794306 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.794316 | orchestrator | 2025-09-19 07:09:01.794331 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-09-19 07:09:01.794342 | orchestrator | Friday 19 September 2025 07:04:57 +0000 (0:00:01.817) 0:02:18.715 ****** 2025-09-19 07:09:01.794351 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.794362 | orchestrator | skipping: [testbed-node-1] 2025-09-19 
07:09:01.794372 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.794382 | orchestrator | 2025-09-19 07:09:01.794392 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-09-19 07:09:01.794402 | orchestrator | Friday 19 September 2025 07:04:57 +0000 (0:00:00.408) 0:02:19.123 ****** 2025-09-19 07:09:01.794412 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:09:01.794422 | orchestrator | 2025-09-19 07:09:01.794432 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-09-19 07:09:01.794443 | orchestrator | Friday 19 September 2025 07:04:58 +0000 (0:00:00.806) 0:02:19.930 ****** 2025-09-19 07:09:01.794453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:09:01.794471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:09:01.794492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:09:01.794503 | orchestrator | 2025-09-19 07:09:01.794513 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-09-19 07:09:01.794523 | orchestrator | Friday 19 September 2025 07:05:02 +0000 (0:00:03.736) 0:02:23.667 ****** 2025-09-19 07:09:01.794534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 07:09:01.794544 | orchestrator | skipping: [testbed-node-0] 
2025-09-19 07:09:01.794559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 07:09:01.794571 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.794581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 07:09:01.794598 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.794608 | orchestrator | 2025-09-19 07:09:01.794619 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-09-19 07:09:01.794629 | orchestrator | Friday 19 September 2025 07:05:03 +0000 (0:00:00.729) 0:02:24.396 ****** 2025-09-19 07:09:01.794639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-19 07:09:01.794649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-19 07:09:01.794659 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.794669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-19 07:09:01.794680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-19 07:09:01.794690 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.794700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-19 07:09:01.794711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-19 07:09:01.794721 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.794731 | orchestrator | 2025-09-19 07:09:01.794741 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-09-19 07:09:01.794757 | orchestrator | Friday 19 September 2025 07:05:04 +0000 (0:00:00.754) 0:02:25.151 ****** 2025-09-19 07:09:01.794767 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.794777 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.794787 | orchestrator | changed: [testbed-node-2] 
2025-09-19 07:09:01.794797 | orchestrator | 2025-09-19 07:09:01.794807 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-09-19 07:09:01.794818 | orchestrator | Friday 19 September 2025 07:05:05 +0000 (0:00:01.433) 0:02:26.584 ****** 2025-09-19 07:09:01.794828 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.794838 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.794848 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.794858 | orchestrator | 2025-09-19 07:09:01.794868 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-09-19 07:09:01.794879 | orchestrator | Friday 19 September 2025 07:05:07 +0000 (0:00:02.049) 0:02:28.633 ****** 2025-09-19 07:09:01.794889 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.794899 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.794909 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.794919 | orchestrator | 2025-09-19 07:09:01.794929 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-09-19 07:09:01.794939 | orchestrator | Friday 19 September 2025 07:05:08 +0000 (0:00:00.548) 0:02:29.182 ****** 2025-09-19 07:09:01.794949 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:09:01.794959 | orchestrator | 2025-09-19 07:09:01.794969 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-09-19 07:09:01.794985 | orchestrator | Friday 19 September 2025 07:05:08 +0000 (0:00:00.898) 0:02:30.081 ****** 2025-09-19 07:09:01.795002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 07:09:01.795024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 
'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 07:09:01.795048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 07:09:01.795077 | orchestrator | 2025-09-19 07:09:01.795088 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-09-19 07:09:01.795099 | orchestrator | Friday 19 September 2025 07:05:12 +0000 (0:00:03.874) 0:02:33.955 ****** 2025-09-19 07:09:01.795123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 
'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 07:09:01.795142 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.795153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 07:09:01.795165 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.795192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 07:09:01.795211 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.795222 | orchestrator | 2025-09-19 07:09:01.795233 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-09-19 07:09:01.795244 | orchestrator | Friday 19 September 2025 07:05:13 +0000 (0:00:01.005) 0:02:34.960 ****** 2025-09-19 07:09:01.795254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-19 07:09:01.795265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-19 07:09:01.795277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-19 07:09:01.795288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-19 07:09:01.795300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-19 07:09:01.795310 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.795326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-19 07:09:01.795337 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-19 07:09:01.795354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-19 07:09:01.795365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-19 07:09:01.795376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-19 07:09:01.795391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-19 07:09:01.795401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-19 07:09:01.795412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-19 07:09:01.795423 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.795433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-19 07:09:01.795444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-19 07:09:01.795454 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.795465 | orchestrator | 2025-09-19 07:09:01.795475 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-09-19 07:09:01.795486 | orchestrator | Friday 19 September 2025 07:05:14 +0000 (0:00:01.003) 0:02:35.963 ****** 2025-09-19 07:09:01.795496 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.795506 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.795516 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.795526 | orchestrator | 2025-09-19 07:09:01.795536 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-09-19 07:09:01.795546 | orchestrator | Friday 19 September 2025 07:05:16 +0000 (0:00:01.330) 0:02:37.294 ****** 2025-09-19 07:09:01.795556 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.795567 | orchestrator | changed: [testbed-node-1] 2025-09-19 
07:09:01.795577 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.795587 | orchestrator | 2025-09-19 07:09:01.795597 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-09-19 07:09:01.795607 | orchestrator | Friday 19 September 2025 07:05:18 +0000 (0:00:02.303) 0:02:39.597 ****** 2025-09-19 07:09:01.795617 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.795633 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.795643 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.795653 | orchestrator | 2025-09-19 07:09:01.795663 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-09-19 07:09:01.795674 | orchestrator | Friday 19 September 2025 07:05:18 +0000 (0:00:00.530) 0:02:40.127 ****** 2025-09-19 07:09:01.795684 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.795694 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.795704 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.795714 | orchestrator | 2025-09-19 07:09:01.795724 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-09-19 07:09:01.795735 | orchestrator | Friday 19 September 2025 07:05:19 +0000 (0:00:00.316) 0:02:40.444 ****** 2025-09-19 07:09:01.795750 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:09:01.795761 | orchestrator | 2025-09-19 07:09:01.795771 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-09-19 07:09:01.795781 | orchestrator | Friday 19 September 2025 07:05:20 +0000 (0:00:01.157) 0:02:41.602 ****** 2025-09-19 07:09:01.795793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 
'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:09:01.795809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 07:09:01.795821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 07:09:01.795833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:09:01.795861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 07:09:01.795873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 07:09:01.795888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:09:01.795900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 07:09:01.795911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 07:09:01.795928 | orchestrator | 2025-09-19 07:09:01.795939 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-09-19 07:09:01.795949 | orchestrator | Friday 19 September 2025 07:05:24 +0000 (0:00:04.277) 0:02:45.879 ****** 2025-09-19 07:09:01.795965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 07:09:01.795978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 07:09:01.795988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 07:09:01.795999 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.796014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 07:09:01.796026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 07:09:01.796044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 07:09:01.796054 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.796086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 07:09:01.796097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 07:09:01.796113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 07:09:01.796124 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.796135 | orchestrator |
2025-09-19 07:09:01.796145 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2025-09-19 07:09:01.796156 | orchestrator | Friday 19 September 2025 07:05:25 +0000 (0:00:00.741) 0:02:46.620 ******
2025-09-19 07:09:01.796166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-19 07:09:01.796184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-19 07:09:01.796194 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.796205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-19 07:09:01.796215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-19 07:09:01.796226 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.796236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-19 07:09:01.796247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-19 07:09:01.796257 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.796267 | orchestrator |
2025-09-19 07:09:01.796278 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2025-09-19 07:09:01.796288 | orchestrator | Friday 19 September 2025 07:05:26 +0000 (0:00:00.847) 0:02:47.468 ******
2025-09-19 07:09:01.796298 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:09:01.796308 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:09:01.796324 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:09:01.796334 | orchestrator |
2025-09-19 07:09:01.796344 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-09-19 07:09:01.796354 | orchestrator | Friday 19 September 2025 07:05:28 +0000 (0:00:01.690) 0:02:49.158 ******
2025-09-19 07:09:01.796364 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:09:01.796374 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:09:01.796385 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:09:01.796395 | orchestrator |
2025-09-19 07:09:01.796405 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2025-09-19 07:09:01.796416 | orchestrator | Friday 19 September 2025 07:05:30 +0000 (0:00:02.022) 0:02:51.181 ******
2025-09-19 07:09:01.796426 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.796436 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.796446 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.796456 | orchestrator |
2025-09-19 07:09:01.796466 | orchestrator | TASK [include_role : magnum] ***************************************************
2025-09-19 07:09:01.796476 | orchestrator | Friday 19 September 2025 07:05:30 +0000 (0:00:00.369) 0:02:51.550 ******
2025-09-19 07:09:01.796486 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:09:01.796496 | orchestrator |
2025-09-19 07:09:01.796506 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2025-09-19 07:09:01.796516 | orchestrator | Friday 19 September 2025 07:05:31 +0000 (0:00:01.010) 0:02:52.561 ******
2025-09-19 07:09:01.796531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 07:09:01.796549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.796560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 07:09:01.796578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 07:09:01.796589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.796607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.796624 | orchestrator |
2025-09-19 07:09:01.796635 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2025-09-19 07:09:01.796645 | orchestrator | Friday 19 September 2025 07:05:34 +0000 (0:00:03.492) 0:02:56.054 ******
2025-09-19 07:09:01.796656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 07:09:01.796666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.796676 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.796693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 07:09:01.796704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.796720 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.796735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 07:09:01.796746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.796756 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.796767 | orchestrator |
2025-09-19 07:09:01.796777 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2025-09-19 07:09:01.796787 | orchestrator | Friday 19 September 2025 07:05:35 +0000 (0:00:00.690) 0:02:56.745 ******
2025-09-19 07:09:01.796797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-09-19 07:09:01.796808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-09-19 07:09:01.796819 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.796829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-09-19 07:09:01.796839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-09-19 07:09:01.796855 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.796865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-09-19 07:09:01.796876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-09-19 07:09:01.796886 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.796896 | orchestrator |
2025-09-19 07:09:01.796912 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2025-09-19 07:09:01.796922 | orchestrator | Friday 19 September 2025 07:05:36 +0000 (0:00:00.940) 0:02:57.685 ******
2025-09-19 07:09:01.796932 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:09:01.796944 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:09:01.796960 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:09:01.796977 | orchestrator |
2025-09-19 07:09:01.796992 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2025-09-19 07:09:01.797007 | orchestrator | Friday 19 September 2025 07:05:38 +0000 (0:00:01.612) 0:02:59.298 ******
2025-09-19 07:09:01.797023 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:09:01.797038 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:09:01.797053 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:09:01.797127 | orchestrator |
2025-09-19 07:09:01.797144 | orchestrator | TASK [include_role : manila] ***************************************************
2025-09-19 07:09:01.797160 | orchestrator | Friday 19 September 2025 07:05:40 +0000 (0:00:02.107) 0:03:01.405 ******
2025-09-19 07:09:01.797177 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:09:01.797193 | orchestrator |
2025-09-19 07:09:01.797208 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2025-09-19 07:09:01.797218 | orchestrator | Friday 19 September 2025 07:05:41 +0000 (0:00:01.077) 0:03:02.483 ******
2025-09-19 07:09:01.797235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-09-19 07:09:01.797247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.797257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.797276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-09-19 07:09:01.797297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.797312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.797323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.797334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.797345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-09-19 07:09:01.797361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.797375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.797388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.797396 | orchestrator |
2025-09-19 07:09:01.797405 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2025-09-19 07:09:01.797413 | orchestrator | Friday 19 September 2025 07:05:44 +0000 (0:00:03.527) 0:03:06.011 ******
2025-09-19 07:09:01.797422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-09-19 07:09:01.797431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.797439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.797629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.797645 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.797654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-09-19 07:09:01.797670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-09-19 07:09:01.797679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.797688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.796696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.797718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.797727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.797736 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.797748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.797757 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.797765 | orchestrator |
2025-09-19 07:09:01.797774 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2025-09-19 07:09:01.797782 | orchestrator | Friday 19 September 2025 07:05:45 +0000 (0:00:01.040) 0:03:07.051 ******
2025-09-19 07:09:01.797790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-09-19 07:09:01.797800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-09-19 07:09:01.797808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-09-19 07:09:01.797816 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.797825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-09-19 07:09:01.797833 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.797841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-09-19 07:09:01.797855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-09-19 07:09:01.797863 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.797871 | orchestrator |
2025-09-19 07:09:01.797879 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2025-09-19 07:09:01.797887 | orchestrator | Friday 19 September 2025 07:05:46 +0000 (0:00:00.917) 0:03:07.968 ******
2025-09-19 07:09:01.797895 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:09:01.797904 | orchestrator | changed:
[testbed-node-1] 2025-09-19 07:09:01.797912 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.797920 | orchestrator | 2025-09-19 07:09:01.797928 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-09-19 07:09:01.797936 | orchestrator | Friday 19 September 2025 07:05:48 +0000 (0:00:01.201) 0:03:09.169 ****** 2025-09-19 07:09:01.797944 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.797952 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.797961 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.797969 | orchestrator | 2025-09-19 07:09:01.797977 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-09-19 07:09:01.797986 | orchestrator | Friday 19 September 2025 07:05:50 +0000 (0:00:02.135) 0:03:11.304 ****** 2025-09-19 07:09:01.797998 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:09:01.798006 | orchestrator | 2025-09-19 07:09:01.798015 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-09-19 07:09:01.798049 | orchestrator | Friday 19 September 2025 07:05:51 +0000 (0:00:01.328) 0:03:12.633 ****** 2025-09-19 07:09:01.798103 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-19 07:09:01.798113 | orchestrator | 2025-09-19 07:09:01.798121 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-09-19 07:09:01.798130 | orchestrator | Friday 19 September 2025 07:05:54 +0000 (0:00:02.725) 0:03:15.358 ****** 2025-09-19 07:09:01.798148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:09:01.798165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 07:09:01.798174 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.798189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:09:01.798198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 07:09:01.798205 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.798217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:09:01.798231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 07:09:01.798241 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.798249 | orchestrator | 2025-09-19 07:09:01.798257 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-09-19 07:09:01.798265 | orchestrator | Friday 19 September 2025 07:05:56 +0000 (0:00:02.153) 0:03:17.512 ****** 2025-09-19 07:09:01.798283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:09:01.798293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 07:09:01.798307 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.798319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:09:01.798329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 07:09:01.798338 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.798346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:09:01.798360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 07:09:01.798369 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.798378 | orchestrator | 2025-09-19 07:09:01.798386 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-09-19 07:09:01.798394 | orchestrator | Friday 19 September 2025 07:05:58 +0000 (0:00:02.104) 0:03:19.616 ****** 2025-09-19 07:09:01.798402 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-19 07:09:01.798415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-19 07:09:01.798424 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.798449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-19 07:09:01.798461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': 
{'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-19 07:09:01.798474 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.798482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-19 07:09:01.798491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-19 07:09:01.798499 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.798508 | orchestrator | 2025-09-19 07:09:01.798515 | orchestrator | TASK 
[proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-09-19 07:09:01.798524 | orchestrator | Friday 19 September 2025 07:06:00 +0000 (0:00:02.334) 0:03:21.951 ****** 2025-09-19 07:09:01.798531 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.798540 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.798548 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.798556 | orchestrator | 2025-09-19 07:09:01.798565 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-09-19 07:09:01.798573 | orchestrator | Friday 19 September 2025 07:06:02 +0000 (0:00:02.096) 0:03:24.047 ****** 2025-09-19 07:09:01.798581 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.798588 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.798595 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.798602 | orchestrator | 2025-09-19 07:09:01.798609 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-09-19 07:09:01.798616 | orchestrator | Friday 19 September 2025 07:06:04 +0000 (0:00:01.564) 0:03:25.611 ****** 2025-09-19 07:09:01.798623 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.798630 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.798637 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.798644 | orchestrator | 2025-09-19 07:09:01.798651 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-09-19 07:09:01.798658 | orchestrator | Friday 19 September 2025 07:06:05 +0000 (0:00:00.555) 0:03:26.167 ****** 2025-09-19 07:09:01.798665 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:09:01.798672 | orchestrator | 2025-09-19 07:09:01.798682 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-09-19 07:09:01.798689 | 
orchestrator | Friday 19 September 2025 07:06:06 +0000 (0:00:01.129) 0:03:27.297 ****** 2025-09-19 07:09:01.798697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-19 07:09:01.798713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-19 07:09:01.798721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-19 07:09:01.798728 | orchestrator |
2025-09-19 07:09:01.798735 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2025-09-19 07:09:01.798742 | orchestrator | Friday 19 September 2025 07:06:07 +0000 (0:00:01.423) 0:03:28.720 ******
2025-09-19 07:09:01.798749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-19 07:09:01.798760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-19 07:09:01.798768 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.798775 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.798782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-19 07:09:01.798793 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.798800 | orchestrator |
2025-09-19 07:09:01.798807 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2025-09-19 07:09:01.798814 | orchestrator | Friday 19 September 2025 07:06:08 +0000 (0:00:00.729) 0:03:29.449 ******
2025-09-19 07:09:01.798821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-09-19 07:09:01.798828 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.798839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-09-19 07:09:01.798846 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.798853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-09-19 07:09:01.798860 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.798867 | orchestrator |
2025-09-19 07:09:01.798874 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-09-19 07:09:01.798881 | orchestrator | Friday 19 September 2025 07:06:08 +0000 (0:00:00.696) 0:03:30.146 ******
2025-09-19 07:09:01.798888 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.798895 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.798902 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.798908 | orchestrator |
2025-09-19 07:09:01.798915 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-09-19 07:09:01.798922 | orchestrator | Friday 19 September 2025 07:06:09 +0000 (0:00:00.446) 0:03:30.592 ******
2025-09-19 07:09:01.798929 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.798936 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.798943 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.798950 | orchestrator |
2025-09-19 07:09:01.798957 | orchestrator | TASK [include_role : mistral] **************************************************
2025-09-19 07:09:01.798963 | orchestrator | Friday 19 September 2025 07:06:10 +0000 (0:00:01.456) 0:03:32.049 ******
2025-09-19 07:09:01.798970 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.798977 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.798984 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.798991 | orchestrator |
2025-09-19 07:09:01.798998 | orchestrator | TASK [include_role : neutron] **************************************************
2025-09-19 07:09:01.799005 | orchestrator | Friday 19 September 2025 07:06:11 +0000 (0:00:00.581) 0:03:32.630 ******
2025-09-19 07:09:01.799012 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:09:01.799019 | orchestrator |
2025-09-19 07:09:01.799026 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2025-09-19 07:09:01.799033 | orchestrator | Friday 19 September 2025 07:06:12 +0000 (0:00:01.202) 0:03:33.832 ******
2025-09-19 07:09:01.799047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 07:09:01.799068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.799082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 07:09:01.799090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.799097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.799113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.799121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.799131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-09-19 07:09:01.799139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.799146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.799158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-09-19 07:09:01.799170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 07:09:01.799178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.799185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 07:09:01.799196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 07:09:01.799204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 07:09:01.799211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.799223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 07:09:01.799234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 07:09:01.799242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.799253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.799260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.799274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 07:09:01.799285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.799293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-09-19 07:09:01.799300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.799311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 07:09:01.799318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.799330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-09-19 07:09:01.799337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 07:09:01.799350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-09-19 07:09:01.799358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.799368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.799376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.799389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-09-19 07:09:01.799400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-09-19 07:09:01.799408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-09-19 07:09:01.799415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 07:09:01.799426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-09-19 07:09:01.799434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.799445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.799453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 07:09:01.799581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.799593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 07:09:01.799605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-09-19 07:09:01.799620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-09-19 07:09:01.799627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 07:09:01.799635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.799673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 07:09:01.799683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:09:01.799694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 
'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.799706 | orchestrator | 2025-09-19 07:09:01.799713 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-09-19 07:09:01.799721 | orchestrator | Friday 19 September 2025 07:06:17 +0000 (0:00:04.722) 0:03:38.555 ****** 2025-09-19 07:09:01.799728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:09:01.799736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 
'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.799786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.799797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.799809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 07:09:01.799822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.799829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 07:09:01.799837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 07:09:01.799886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.799897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:09:01.799909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.799922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:09:01.799930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 07:09:01.799937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.799986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 07:09:01.799997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.800008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.800021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 07:09:01.800029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.800125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:09:01.800138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 07:09:01.800151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.800165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:09:01.800172 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.800180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.800236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.800247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.800254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 07:09:01.800271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 07:09:01.800279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.800286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.800293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 07:09:01.800345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:09:01.800355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.800372 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.800380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 07:09:01.800387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 07:09:01.800395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 
'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 07:09:01.800402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 07:09:01.800453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.800471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.800482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 07:09:01.800490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:09:01.800498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:09:01.800550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.800561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.800579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 07:09:01.800587 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.800594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 07:09:01.800601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.800608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 07:09:01.800634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:09:01.800647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': 
{'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.800654 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.800660 | orchestrator | 2025-09-19 07:09:01.800667 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-09-19 07:09:01.800674 | orchestrator | Friday 19 September 2025 07:06:19 +0000 (0:00:01.971) 0:03:40.527 ****** 2025-09-19 07:09:01.800680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-19 07:09:01.800690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-19 07:09:01.800698 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.800705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-19 07:09:01.800712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}})  2025-09-19 07:09:01.800719 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.800726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-19 07:09:01.800732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-19 07:09:01.800739 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.800745 | orchestrator | 2025-09-19 07:09:01.800752 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-09-19 07:09:01.800758 | orchestrator | Friday 19 September 2025 07:06:21 +0000 (0:00:01.873) 0:03:42.400 ****** 2025-09-19 07:09:01.800765 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.800771 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.800778 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.800784 | orchestrator | 2025-09-19 07:09:01.800791 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-09-19 07:09:01.800797 | orchestrator | Friday 19 September 2025 07:06:23 +0000 (0:00:01.950) 0:03:44.351 ****** 2025-09-19 07:09:01.800804 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.800810 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.800817 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.800823 | orchestrator | 2025-09-19 07:09:01.800829 | orchestrator | TASK [include_role : placement] ************************************************ 2025-09-19 07:09:01.800836 | orchestrator | Friday 19 September 2025 07:06:25 +0000 (0:00:01.996) 0:03:46.347 ****** 2025-09-19 07:09:01.800842 | orchestrator | included: placement for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-09-19 07:09:01.800855 | orchestrator | 2025-09-19 07:09:01.800875 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-09-19 07:09:01.800882 | orchestrator | Friday 19 September 2025 07:06:26 +0000 (0:00:01.233) 0:03:47.581 ****** 2025-09-19 07:09:01.800916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:09:01.800925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:09:01.800937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:09:01.800944 | orchestrator | 2025-09-19 07:09:01.800951 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-09-19 07:09:01.800958 | orchestrator | Friday 19 September 2025 07:06:29 +0000 (0:00:03.510) 0:03:51.091 ****** 2025-09-19 07:09:01.800965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 07:09:01.800978 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.801003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 07:09:01.801012 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.801018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 07:09:01.801025 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.801032 | orchestrator | 2025-09-19 07:09:01.801039 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-09-19 07:09:01.801045 | orchestrator | Friday 19 September 2025 07:06:30 +0000 (0:00:00.888) 0:03:51.980 ****** 2025-09-19 07:09:01.801071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-19 07:09:01.801079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-19 07:09:01.801087 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.801094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-19 07:09:01.801100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-19 07:09:01.801107 | orchestrator | skipping: 
[testbed-node-1] 2025-09-19 07:09:01.801114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-19 07:09:01.801121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-19 07:09:01.801135 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.801141 | orchestrator | 2025-09-19 07:09:01.801148 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-09-19 07:09:01.801154 | orchestrator | Friday 19 September 2025 07:06:31 +0000 (0:00:00.717) 0:03:52.697 ****** 2025-09-19 07:09:01.801160 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.801167 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.801173 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.801180 | orchestrator | 2025-09-19 07:09:01.801188 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-09-19 07:09:01.801196 | orchestrator | Friday 19 September 2025 07:06:32 +0000 (0:00:01.163) 0:03:53.861 ****** 2025-09-19 07:09:01.801204 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.801212 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.801219 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.801226 | orchestrator | 2025-09-19 07:09:01.801234 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-09-19 07:09:01.801241 | orchestrator | Friday 19 September 2025 07:06:34 +0000 (0:00:01.735) 0:03:55.597 ****** 2025-09-19 07:09:01.801249 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 
07:09:01.801257 | orchestrator | 2025-09-19 07:09:01.801265 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-09-19 07:09:01.801272 | orchestrator | Friday 19 September 2025 07:06:35 +0000 (0:00:01.287) 0:03:56.884 ****** 2025-09-19 07:09:01.801304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:09:01.801314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.801326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.801342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:09:01.801351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.801378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.801392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:09:01.801401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.801415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.801422 | orchestrator | 2025-09-19 07:09:01.801430 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-09-19 07:09:01.801438 | orchestrator | Friday 19 September 2025 07:06:40 +0000 (0:00:04.396) 0:04:01.281 ****** 2025-09-19 07:09:01.801464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 07:09:01.801474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.801482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.801495 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.801502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 07:09:01.801515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.801522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.801528 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.801553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 07:09:01.801565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.801577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.801584 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.801591 | orchestrator | 2025-09-19 07:09:01.801597 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-09-19 07:09:01.801604 | orchestrator | Friday 19 September 2025 07:06:40 +0000 (0:00:00.673) 0:04:01.954 ****** 2025-09-19 07:09:01.801611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-19 07:09:01.801619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-19 07:09:01.801626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-19 07:09:01.801632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-19 07:09:01.801639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-19 07:09:01.801646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}})  2025-09-19 07:09:01.801670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-19 07:09:01.801678 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.801685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-19 07:09:01.801691 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.801698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-19 07:09:01.801704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-19 07:09:01.801711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-19 07:09:01.801723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-19 07:09:01.801730 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.801736 | orchestrator | 2025-09-19 07:09:01.801743 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-09-19 
07:09:01.801749 | orchestrator | Friday 19 September 2025 07:06:42 +0000 (0:00:01.637) 0:04:03.592 ****** 2025-09-19 07:09:01.801756 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.801762 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.801768 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.801775 | orchestrator | 2025-09-19 07:09:01.801781 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-09-19 07:09:01.801788 | orchestrator | Friday 19 September 2025 07:06:43 +0000 (0:00:01.528) 0:04:05.121 ****** 2025-09-19 07:09:01.801794 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.801801 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.801807 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.801813 | orchestrator | 2025-09-19 07:09:01.801820 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-09-19 07:09:01.801826 | orchestrator | Friday 19 September 2025 07:06:45 +0000 (0:00:02.005) 0:04:07.127 ****** 2025-09-19 07:09:01.801833 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:09:01.801839 | orchestrator | 2025-09-19 07:09:01.801846 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-09-19 07:09:01.801852 | orchestrator | Friday 19 September 2025 07:06:47 +0000 (0:00:01.588) 0:04:08.715 ****** 2025-09-19 07:09:01.801859 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-09-19 07:09:01.801866 | orchestrator | 2025-09-19 07:09:01.801872 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-09-19 07:09:01.801878 | orchestrator | Friday 19 September 2025 07:06:48 +0000 (0:00:00.858) 0:04:09.574 ****** 2025-09-19 
07:09:01.801956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-19 07:09:01.801978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-19 07:09:01.801985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-19 07:09:01.801991 | orchestrator | 2025-09-19 07:09:01.802043 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-09-19 07:09:01.802073 | orchestrator | Friday 19 September 2025 07:06:52 +0000 (0:00:04.062) 0:04:13.637 ****** 2025-09-19 07:09:01.802081 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 07:09:01.802088 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.802095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 07:09:01.802102 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.802112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 07:09:01.802119 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.802125 | orchestrator | 2025-09-19 07:09:01.802132 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-09-19 07:09:01.802138 | orchestrator | Friday 19 September 
2025 07:06:53 +0000 (0:00:01.466) 0:04:15.103 ****** 2025-09-19 07:09:01.802145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 07:09:01.802152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 07:09:01.802159 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.802166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 07:09:01.802172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 07:09:01.802179 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.802185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 07:09:01.802192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 07:09:01.802198 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.802209 | orchestrator | 
2025-09-19 07:09:01.802215 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-19 07:09:01.802222 | orchestrator | Friday 19 September 2025 07:06:55 +0000 (0:00:01.568) 0:04:16.671 ****** 2025-09-19 07:09:01.802228 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.802235 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.802241 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.802247 | orchestrator | 2025-09-19 07:09:01.802254 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-19 07:09:01.802260 | orchestrator | Friday 19 September 2025 07:06:57 +0000 (0:00:02.399) 0:04:19.071 ****** 2025-09-19 07:09:01.802267 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.802273 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.802300 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.802307 | orchestrator | 2025-09-19 07:09:01.802314 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-09-19 07:09:01.802320 | orchestrator | Friday 19 September 2025 07:07:00 +0000 (0:00:02.800) 0:04:21.872 ****** 2025-09-19 07:09:01.802326 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-09-19 07:09:01.802333 | orchestrator | 2025-09-19 07:09:01.802340 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-09-19 07:09:01.802346 | orchestrator | Friday 19 September 2025 07:07:02 +0000 (0:00:01.423) 0:04:23.295 ****** 2025-09-19 07:09:01.802353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 
'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 07:09:01.802360 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.802370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 07:09:01.802377 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.802383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 07:09:01.802390 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.802396 | orchestrator | 2025-09-19 07:09:01.802403 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-09-19 07:09:01.802409 | orchestrator | Friday 19 September 2025 07:07:03 +0000 (0:00:01.235) 0:04:24.531 ****** 2025-09-19 07:09:01.802416 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 07:09:01.802427 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.802434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 07:09:01.802441 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.802447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 07:09:01.802470 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.802478 | orchestrator | 2025-09-19 07:09:01.802484 | orchestrator | TASK [haproxy-config : Configuring firewall for 
nova-cell:nova-spicehtml5proxy] *** 2025-09-19 07:09:01.802491 | orchestrator | Friday 19 September 2025 07:07:04 +0000 (0:00:01.332) 0:04:25.864 ****** 2025-09-19 07:09:01.802497 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.802503 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.802510 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.802516 | orchestrator | 2025-09-19 07:09:01.802522 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-19 07:09:01.802529 | orchestrator | Friday 19 September 2025 07:07:06 +0000 (0:00:01.789) 0:04:27.654 ****** 2025-09-19 07:09:01.802535 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:09:01.802542 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:09:01.802548 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:09:01.802555 | orchestrator | 2025-09-19 07:09:01.802561 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-19 07:09:01.802568 | orchestrator | Friday 19 September 2025 07:07:08 +0000 (0:00:02.393) 0:04:30.048 ****** 2025-09-19 07:09:01.802574 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:09:01.802580 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:09:01.802587 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:09:01.802593 | orchestrator | 2025-09-19 07:09:01.802600 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-09-19 07:09:01.802607 | orchestrator | Friday 19 September 2025 07:07:12 +0000 (0:00:03.127) 0:04:33.175 ****** 2025-09-19 07:09:01.802613 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-09-19 07:09:01.802620 | orchestrator | 2025-09-19 07:09:01.802626 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-09-19 
07:09:01.802633 | orchestrator | Friday 19 September 2025 07:07:12 +0000 (0:00:00.851) 0:04:34.027 ****** 2025-09-19 07:09:01.802643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 07:09:01.802654 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.802661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 07:09:01.802667 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.802674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 07:09:01.802681 | orchestrator | 
skipping: [testbed-node-2] 2025-09-19 07:09:01.802687 | orchestrator | 2025-09-19 07:09:01.802693 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-09-19 07:09:01.802700 | orchestrator | Friday 19 September 2025 07:07:14 +0000 (0:00:01.291) 0:04:35.319 ****** 2025-09-19 07:09:01.802706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 07:09:01.802713 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.802736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 07:09:01.802744 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.802751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 07:09:01.802757 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.802764 | orchestrator | 2025-09-19 07:09:01.802770 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-09-19 07:09:01.802776 | orchestrator | Friday 19 September 2025 07:07:15 +0000 (0:00:01.349) 0:04:36.668 ****** 2025-09-19 07:09:01.802783 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.802789 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.802796 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.802802 | orchestrator | 2025-09-19 07:09:01.802808 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-19 07:09:01.802819 | orchestrator | Friday 19 September 2025 07:07:16 +0000 (0:00:01.409) 0:04:38.078 ****** 2025-09-19 07:09:01.802825 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:09:01.802832 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:09:01.802838 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:09:01.802844 | orchestrator | 2025-09-19 07:09:01.802851 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-19 07:09:01.802861 | orchestrator | Friday 19 September 2025 07:07:19 +0000 (0:00:02.473) 0:04:40.551 ****** 2025-09-19 07:09:01.802867 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:09:01.802874 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:09:01.802880 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:09:01.802886 | orchestrator | 2025-09-19 07:09:01.802893 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-09-19 07:09:01.802900 | orchestrator | Friday 19 September 2025 
07:07:22 +0000 (0:00:02.971) 0:04:43.523 ****** 2025-09-19 07:09:01.802906 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:09:01.802912 | orchestrator | 2025-09-19 07:09:01.802919 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-09-19 07:09:01.802925 | orchestrator | Friday 19 September 2025 07:07:23 +0000 (0:00:01.578) 0:04:45.101 ****** 2025-09-19 07:09:01.802932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 07:09:01.802940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 
07:09:01.802964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:09:01.802972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:09:01.802985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.802995 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 07:09:01.803002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:09:01.803009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:09:01.803016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:09:01.803040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.803053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 07:09:01.803104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:09:01.803112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:09:01.803118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:09:01.803125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.803132 | orchestrator | 2025-09-19 07:09:01.803139 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-09-19 07:09:01.803145 | orchestrator | Friday 19 September 2025 07:07:27 +0000 (0:00:03.555) 0:04:48.657 ****** 2025-09-19 07:09:01.803173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 07:09:01.803186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:09:01.803196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:09:01.803202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:09:01.803208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.803213 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.803235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 07:09:01.803247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:09:01.803253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:09:01.803263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:09:01.803269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.803274 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.803280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 07:09:01.803287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:09:01.803312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:09:01.803319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:09:01.803328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:09:01.803334 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.803339 | orchestrator | 2025-09-19 07:09:01.803345 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-09-19 07:09:01.803351 | orchestrator | Friday 19 September 2025 07:07:28 +0000 (0:00:01.083) 0:04:49.740 ****** 2025-09-19 07:09:01.803357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-19 07:09:01.803363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-19 07:09:01.803369 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.803374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-19 07:09:01.803380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-19 07:09:01.803386 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.803392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-19 07:09:01.803397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-19 07:09:01.803407 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.803413 | orchestrator | 2025-09-19 07:09:01.803418 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-09-19 07:09:01.803424 | orchestrator | Friday 19 September 2025 07:07:29 +0000 (0:00:01.278) 0:04:51.018 ****** 2025-09-19 07:09:01.803430 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.803435 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.803441 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.803446 | orchestrator | 2025-09-19 07:09:01.803452 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-09-19 07:09:01.803457 | orchestrator | Friday 19 September 2025 07:07:31 +0000 (0:00:01.344) 0:04:52.363 ****** 2025-09-19 07:09:01.803463 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.803469 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.803474 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.803480 | orchestrator | 2025-09-19 07:09:01.803501 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-09-19 07:09:01.803508 | orchestrator | Friday 19 September 2025 07:07:33 +0000 (0:00:02.084) 0:04:54.447 ****** 2025-09-19 07:09:01.803513 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:09:01.803519 | orchestrator | 2025-09-19 07:09:01.803525 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-09-19 07:09:01.803530 | orchestrator | Friday 19 September 2025 07:07:34 +0000 (0:00:01.673) 0:04:56.121 ****** 2025-09-19 07:09:01.803536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:09:01.803546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:09:01.803552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:09:01.803562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:09:01.803585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:09:01.803595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:09:01.803601 | orchestrator | 2025-09-19 07:09:01.803607 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when 
using single external frontend] *** 2025-09-19 07:09:01.803613 | orchestrator | Friday 19 September 2025 07:07:40 +0000 (0:00:05.158) 0:05:01.279 ****** 2025-09-19 07:09:01.803619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 07:09:01.803644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 07:09:01.803651 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.803657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 07:09:01.803666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 07:09:01.803673 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.803679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 07:09:01.803689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 07:09:01.803695 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.803701 | orchestrator | 2025-09-19 07:09:01.803706 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-09-19 07:09:01.803727 | orchestrator | Friday 19 September 2025 07:07:40 +0000 (0:00:00.788) 0:05:02.067 ****** 2025-09-19 07:09:01.803733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-19 07:09:01.803739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-19 07:09:01.803745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-19 07:09:01.803755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-19 07:09:01.803761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-19 07:09:01.803767 | orchestrator | skipping: [testbed-node-0] 
2025-09-19 07:09:01.803776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-19 07:09:01.803782 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.803788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-19 07:09:01.803793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-19 07:09:01.803803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-19 07:09:01.803809 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.803815 | orchestrator | 2025-09-19 07:09:01.803820 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-09-19 07:09:01.803826 | orchestrator | Friday 19 September 2025 07:07:42 +0000 (0:00:01.608) 0:05:03.676 ****** 2025-09-19 07:09:01.803832 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.803837 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.803846 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.803851 | orchestrator | 2025-09-19 07:09:01.803857 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-09-19 07:09:01.803863 | orchestrator | Friday 19 September 2025 
07:07:42 +0000 (0:00:00.431) 0:05:04.108 ****** 2025-09-19 07:09:01.803868 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.803874 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.803879 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.803885 | orchestrator | 2025-09-19 07:09:01.803891 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-09-19 07:09:01.803896 | orchestrator | Friday 19 September 2025 07:07:44 +0000 (0:00:01.305) 0:05:05.413 ****** 2025-09-19 07:09:01.803902 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:09:01.803907 | orchestrator | 2025-09-19 07:09:01.803913 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-09-19 07:09:01.803919 | orchestrator | Friday 19 September 2025 07:07:45 +0000 (0:00:01.691) 0:05:07.105 ****** 2025-09-19 07:09:01.803925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-19 07:09:01.803948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-19 07:09:01.803955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:09:01.803970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:09:01.803976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:09:01.803982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:09:01.803988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:09:01.803994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:09:01.804015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:09:01.804022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:09:01.804031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-19 
07:09:01.804041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:09:01.804047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:09:01.804053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:09:01.804071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:09:01.804094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-19 07:09:01.804104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-19 07:09:01.804117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-19 07:09:01.804124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-19 07:09:01.804130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:09:01.804140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:09:01.804146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:09:01.804155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 
'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:09:01.804164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 07:09:01.804170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 07:09:01.804176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-19 07:09:01.804187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-19 07:09:01.804193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:09:01.804202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:09:01.804211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 07:09:01.804217 | orchestrator | 2025-09-19 07:09:01.804222 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-09-19 07:09:01.804228 | orchestrator | Friday 19 September 2025 07:07:50 +0000 (0:00:04.098) 0:05:11.204 ****** 2025-09-19 07:09:01.804234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-19 07:09:01.804240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:09:01.804246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:09:01.804255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:09:01.804261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:09:01.804274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 07:09:01.804280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-19 07:09:01.804286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:09:01.804292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:09:01.804302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 07:09:01.804312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:09:01.804318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 07:09:01.804326 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.804332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:09:01.804338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:09:01.804344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:09:01.804350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True},
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 07:09:01.804359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 07:09:01.804369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:09:01.804380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-19 07:09:01.804386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:09:01.804392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:09:01.804398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:09:01.804407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:09:01.804419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:09:01.804424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 07:09:01.804430 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.804439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 07:09:01.804445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']},
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-19 07:09:01.804451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:09:01.804464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:09:01.804470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 07:09:01.804476 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.804482 | orchestrator |
2025-09-19 07:09:01.804487 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2025-09-19 07:09:01.804493 | orchestrator | Friday 19 September 2025 07:07:50 +0000 (0:00:00.839) 0:05:12.044 ******
2025-09-19 07:09:01.804499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-09-19 07:09:01.804505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-09-19 07:09:01.804513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-19 07:09:01.804520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-09-19 07:09:01.804526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-19 07:09:01.804532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-09-19 07:09:01.804538 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.804543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-19 07:09:01.804550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-19 07:09:01.804555 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.804561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-09-19 07:09:01.804571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-09-19 07:09:01.804577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-19 07:09:01.804583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-19 07:09:01.804589 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.804594 | orchestrator |
2025-09-19 07:09:01.804603 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2025-09-19 07:09:01.804609 | orchestrator | Friday 19 September 2025 07:07:52 +0000 (0:00:01.256) 0:05:13.300 ******
2025-09-19 07:09:01.804615 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.804620 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.804626 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.804632 | orchestrator |
2025-09-19 07:09:01.804637 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2025-09-19 07:09:01.804643 | orchestrator | Friday 19 September 2025 07:07:52 +0000 (0:00:00.480) 0:05:13.780 ******
2025-09-19 07:09:01.804648 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.804654 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.804659 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.804665 | orchestrator |
2025-09-19 07:09:01.804671 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2025-09-19 07:09:01.804676 | orchestrator | Friday 19 September 2025 07:07:53 +0000 (0:00:01.300) 0:05:15.081 ******
2025-09-19 07:09:01.804682 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:09:01.804687 | orchestrator |
2025-09-19 07:09:01.804693 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2025-09-19 07:09:01.804699 | orchestrator | Friday 19 September 2025 07:07:55 +0000 (0:00:01.417) 0:05:16.498 ******
2025-09-19 07:09:01.804707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR':
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 07:09:01.804714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 07:09:01.804725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 07:09:01.804731 | orchestrator |
2025-09-19 07:09:01.804737 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2025-09-19 07:09:01.804743 | orchestrator | Friday 19 September 2025 07:07:57 +0000 (0:00:02.599) 0:05:19.098 ******
2025-09-19 07:09:01.804752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 07:09:01.804758 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.804767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 07:09:01.804773 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.804779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 07:09:01.804789 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.804795 | orchestrator |
2025-09-19 07:09:01.804800 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2025-09-19 07:09:01.804806 | orchestrator | Friday 19 September 2025 07:07:58 +0000 (0:00:00.400) 0:05:19.498 ******
2025-09-19 07:09:01.804812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-19 07:09:01.804817 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.804823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-19 07:09:01.804829 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.804834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-19 07:09:01.804840 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.804845 | orchestrator |
2025-09-19 07:09:01.804851 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2025-09-19 07:09:01.804857 | orchestrator | Friday 19 September 2025 07:07:58 +0000 (0:00:00.613) 0:05:20.112 ******
2025-09-19 07:09:01.804862 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.804871 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.804876 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.804882 | orchestrator |
2025-09-19 07:09:01.804888 | orchestrator | TASK [proxysql-config :
Copying over rabbitmq ProxySQL rules config] ***********
2025-09-19 07:09:01.804893 | orchestrator | Friday 19 September 2025 07:07:59 +0000 (0:00:00.819) 0:05:20.931 ******
2025-09-19 07:09:01.804899 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.804904 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:09:01.804910 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:09:01.804915 | orchestrator |
2025-09-19 07:09:01.804921 | orchestrator | TASK [include_role : skyline] **************************************************
2025-09-19 07:09:01.804927 | orchestrator | Friday 19 September 2025 07:08:01 +0000 (0:00:01.361) 0:05:22.293 ******
2025-09-19 07:09:01.804932 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:09:01.804938 | orchestrator |
2025-09-19 07:09:01.804943 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2025-09-19 07:09:01.804949 | orchestrator | Friday 19 September 2025 07:08:02 +0000 (0:00:01.504) 0:05:23.798 ******
2025-09-19 07:09:01.804957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-19 07:09:01.804968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-19 07:09:01.804974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-19 07:09:01.804983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-19 07:09:01.804990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-19 07:09:01.805002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-19 07:09:01.805008 | orchestrator |
2025-09-19 07:09:01.805014 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2025-09-19 07:09:01.805019 | orchestrator | Friday 19 September 2025 07:08:08 +0000 (0:00:05.946) 0:05:29.744 ******
2025-09-19 07:09:01.805025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-19 07:09:01.805034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-19 07:09:01.805040 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:09:01.805046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998',
'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-19 07:09:01.805074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-19 07:09:01.805081 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.805089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-19 07:09:01.805099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-19 07:09:01.805112 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.805120 | orchestrator | 2025-09-19 07:09:01.805129 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-09-19 07:09:01.805138 | orchestrator | Friday 19 September 2025 07:08:09 +0000 (0:00:00.696) 0:05:30.440 ****** 2025-09-19 07:09:01.805147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-19 07:09:01.805156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-19 07:09:01.805170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-19 07:09:01.805177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-19 07:09:01.805182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-19 07:09:01.805188 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.805197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-19 07:09:01.805204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-19 07:09:01.805209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-19 07:09:01.805215 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.805221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 
'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-19 07:09:01.805227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-19 07:09:01.805232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-19 07:09:01.805238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-19 07:09:01.805244 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.805250 | orchestrator | 2025-09-19 07:09:01.805255 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-09-19 07:09:01.805261 | orchestrator | Friday 19 September 2025 07:08:10 +0000 (0:00:01.007) 0:05:31.448 ****** 2025-09-19 07:09:01.805267 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.805272 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.805278 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.805283 | orchestrator | 2025-09-19 07:09:01.805289 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-09-19 07:09:01.805295 | orchestrator | Friday 19 September 2025 07:08:12 +0000 (0:00:02.126) 0:05:33.574 ****** 2025-09-19 07:09:01.805300 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.805306 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.805311 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.805317 | orchestrator | 2025-09-19 07:09:01.805322 | orchestrator | 
TASK [include_role : swift] **************************************************** 2025-09-19 07:09:01.805328 | orchestrator | Friday 19 September 2025 07:08:14 +0000 (0:00:02.044) 0:05:35.619 ****** 2025-09-19 07:09:01.805334 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.805339 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.805345 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.805355 | orchestrator | 2025-09-19 07:09:01.805360 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-09-19 07:09:01.805366 | orchestrator | Friday 19 September 2025 07:08:14 +0000 (0:00:00.328) 0:05:35.948 ****** 2025-09-19 07:09:01.805371 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.805377 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.805386 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.805392 | orchestrator | 2025-09-19 07:09:01.805397 | orchestrator | TASK [include_role : trove] **************************************************** 2025-09-19 07:09:01.805403 | orchestrator | Friday 19 September 2025 07:08:15 +0000 (0:00:00.320) 0:05:36.268 ****** 2025-09-19 07:09:01.805409 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.805414 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.805420 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.805425 | orchestrator | 2025-09-19 07:09:01.805431 | orchestrator | TASK [include_role : venus] **************************************************** 2025-09-19 07:09:01.805436 | orchestrator | Friday 19 September 2025 07:08:15 +0000 (0:00:00.332) 0:05:36.601 ****** 2025-09-19 07:09:01.805442 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.805448 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.805453 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.805459 | orchestrator | 2025-09-19 07:09:01.805464 | orchestrator | 
TASK [include_role : watcher] ************************************************** 2025-09-19 07:09:01.805470 | orchestrator | Friday 19 September 2025 07:08:16 +0000 (0:00:00.670) 0:05:37.271 ****** 2025-09-19 07:09:01.805475 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.805481 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.805486 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.805492 | orchestrator | 2025-09-19 07:09:01.805497 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-09-19 07:09:01.805503 | orchestrator | Friday 19 September 2025 07:08:16 +0000 (0:00:00.333) 0:05:37.604 ****** 2025-09-19 07:09:01.805509 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.805514 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.805520 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.805525 | orchestrator | 2025-09-19 07:09:01.805531 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-09-19 07:09:01.805536 | orchestrator | Friday 19 September 2025 07:08:16 +0000 (0:00:00.524) 0:05:38.129 ****** 2025-09-19 07:09:01.805542 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:09:01.805548 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:09:01.805553 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:09:01.805559 | orchestrator | 2025-09-19 07:09:01.805564 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-09-19 07:09:01.805573 | orchestrator | Friday 19 September 2025 07:08:17 +0000 (0:00:00.779) 0:05:38.909 ****** 2025-09-19 07:09:01.805579 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:09:01.805584 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:09:01.805590 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:09:01.805595 | orchestrator | 2025-09-19 07:09:01.805601 | orchestrator | RUNNING HANDLER [loadbalancer : Stop 
backup keepalived container] ************** 2025-09-19 07:09:01.805607 | orchestrator | Friday 19 September 2025 07:08:18 +0000 (0:00:00.324) 0:05:39.233 ****** 2025-09-19 07:09:01.805612 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:09:01.805618 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:09:01.805623 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:09:01.805629 | orchestrator | 2025-09-19 07:09:01.805634 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-09-19 07:09:01.805640 | orchestrator | Friday 19 September 2025 07:08:18 +0000 (0:00:00.802) 0:05:40.036 ****** 2025-09-19 07:09:01.805645 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:09:01.805651 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:09:01.805656 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:09:01.805662 | orchestrator | 2025-09-19 07:09:01.805667 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-09-19 07:09:01.805677 | orchestrator | Friday 19 September 2025 07:08:19 +0000 (0:00:00.780) 0:05:40.816 ****** 2025-09-19 07:09:01.805683 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:09:01.805688 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:09:01.805694 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:09:01.805699 | orchestrator | 2025-09-19 07:09:01.805704 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-09-19 07:09:01.805710 | orchestrator | Friday 19 September 2025 07:08:20 +0000 (0:00:00.965) 0:05:41.782 ****** 2025-09-19 07:09:01.805716 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.805721 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.805727 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.805732 | orchestrator | 2025-09-19 07:09:01.805738 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-09-19 
07:09:01.805743 | orchestrator | Friday 19 September 2025 07:08:30 +0000 (0:00:09.635) 0:05:51.418 ****** 2025-09-19 07:09:01.805749 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:09:01.805755 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:09:01.805760 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:09:01.805766 | orchestrator | 2025-09-19 07:09:01.805772 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-09-19 07:09:01.805777 | orchestrator | Friday 19 September 2025 07:08:31 +0000 (0:00:00.794) 0:05:52.212 ****** 2025-09-19 07:09:01.805783 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.805788 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.805794 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.805799 | orchestrator | 2025-09-19 07:09:01.805805 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-09-19 07:09:01.805811 | orchestrator | Friday 19 September 2025 07:08:43 +0000 (0:00:12.548) 0:06:04.761 ****** 2025-09-19 07:09:01.805816 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:09:01.805822 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:09:01.805827 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:09:01.805833 | orchestrator | 2025-09-19 07:09:01.805839 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-09-19 07:09:01.805844 | orchestrator | Friday 19 September 2025 07:08:44 +0000 (0:00:00.736) 0:06:05.497 ****** 2025-09-19 07:09:01.805850 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:09:01.805855 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:09:01.805861 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:09:01.805866 | orchestrator | 2025-09-19 07:09:01.805872 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-09-19 07:09:01.805878 | orchestrator | Friday 19 
September 2025 07:08:54 +0000 (0:00:09.724) 0:06:15.221 ****** 2025-09-19 07:09:01.805884 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.805889 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.805895 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.805900 | orchestrator | 2025-09-19 07:09:01.805909 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-09-19 07:09:01.805915 | orchestrator | Friday 19 September 2025 07:08:54 +0000 (0:00:00.325) 0:06:15.547 ****** 2025-09-19 07:09:01.805920 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.805926 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.805932 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.805937 | orchestrator | 2025-09-19 07:09:01.805943 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-09-19 07:09:01.805948 | orchestrator | Friday 19 September 2025 07:08:54 +0000 (0:00:00.308) 0:06:15.856 ****** 2025-09-19 07:09:01.805954 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.805959 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.805965 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.805970 | orchestrator | 2025-09-19 07:09:01.805976 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-09-19 07:09:01.805982 | orchestrator | Friday 19 September 2025 07:08:55 +0000 (0:00:00.302) 0:06:16.158 ****** 2025-09-19 07:09:01.805991 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.805997 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.806002 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.806008 | orchestrator | 2025-09-19 07:09:01.806014 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-09-19 07:09:01.806041 | orchestrator | Friday 19 
September 2025 07:08:55 +0000 (0:00:00.570) 0:06:16.729 ****** 2025-09-19 07:09:01.806047 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.806054 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.806072 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.806078 | orchestrator | 2025-09-19 07:09:01.806084 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-09-19 07:09:01.806090 | orchestrator | Friday 19 September 2025 07:08:55 +0000 (0:00:00.364) 0:06:17.093 ****** 2025-09-19 07:09:01.806095 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:09:01.806101 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:09:01.806106 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:09:01.806112 | orchestrator | 2025-09-19 07:09:01.806118 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-09-19 07:09:01.806123 | orchestrator | Friday 19 September 2025 07:08:56 +0000 (0:00:00.308) 0:06:17.402 ****** 2025-09-19 07:09:01.806129 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:09:01.806135 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:09:01.806140 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:09:01.806146 | orchestrator | 2025-09-19 07:09:01.806152 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-09-19 07:09:01.806157 | orchestrator | Friday 19 September 2025 07:08:57 +0000 (0:00:01.058) 0:06:18.460 ****** 2025-09-19 07:09:01.806163 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:09:01.806168 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:09:01.806174 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:09:01.806179 | orchestrator | 2025-09-19 07:09:01.806185 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:09:01.806191 | orchestrator | testbed-node-0 : ok=123  changed=76  
unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-19 07:09:01.806213 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-19 07:09:01.806220 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-19 07:09:01.806225 | orchestrator | 2025-09-19 07:09:01.806231 | orchestrator | 2025-09-19 07:09:01.806237 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:09:01.806243 | orchestrator | Friday 19 September 2025 07:08:58 +0000 (0:00:01.026) 0:06:19.486 ****** 2025-09-19 07:09:01.806248 | orchestrator | =============================================================================== 2025-09-19 07:09:01.806254 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 12.55s 2025-09-19 07:09:01.806259 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.72s 2025-09-19 07:09:01.806265 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.64s 2025-09-19 07:09:01.806271 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.95s 2025-09-19 07:09:01.806276 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.16s 2025-09-19 07:09:01.806282 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.72s 2025-09-19 07:09:01.806287 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.71s 2025-09-19 07:09:01.806293 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.59s 2025-09-19 07:09:01.806298 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.40s 2025-09-19 07:09:01.806308 | orchestrator | haproxy-config : Copying over aodh haproxy config 
----------------------- 4.36s 2025-09-19 07:09:01.806314 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.28s 2025-09-19 07:09:01.806320 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.10s 2025-09-19 07:09:01.806325 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.06s 2025-09-19 07:09:01.806331 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.96s 2025-09-19 07:09:01.806337 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.95s 2025-09-19 07:09:01.806342 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.87s 2025-09-19 07:09:01.806348 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 3.78s 2025-09-19 07:09:01.806353 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.78s 2025-09-19 07:09:01.806364 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 3.74s 2025-09-19 07:09:01.806370 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 3.58s 2025-09-19 07:09:01.806376 | orchestrator | 2025-09-19 07:09:01 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:09:01.806382 | orchestrator | 2025-09-19 07:09:01 | INFO  | Task a0949288-5b2f-4732-bb47-356f652a548f is in state STARTED 2025-09-19 07:09:01.806387 | orchestrator | 2025-09-19 07:09:01 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:09:01.806393 | orchestrator | 2025-09-19 07:09:01 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:09:04.824108 | orchestrator | 2025-09-19 07:09:04 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:09:04.825443 | orchestrator | 2025-09-19 07:09:04 | INFO  
| Task a0949288-5b2f-4732-bb47-356f652a548f is in state STARTED 2025-09-19 07:09:04.826889 | orchestrator | 2025-09-19 07:09:04 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:09:04.827214 | orchestrator | 2025-09-19 07:09:04 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:10:02.604890 | orchestrator | 2025-09-19 07:10:02 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:10:02.607852 | orchestrator | 2025-09-19 07:10:02 | INFO  | Task a0949288-5b2f-4732-bb47-356f652a548f is in state STARTED 2025-09-19 07:10:02.607896 | orchestrator | 
2025-09-19 07:10:02 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:10:02.607929 | orchestrator | 2025-09-19 07:10:02 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:10:05.654542 | orchestrator | 2025-09-19 07:10:05 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:10:05.655515 | orchestrator | 2025-09-19 07:10:05 | INFO  | Task a0949288-5b2f-4732-bb47-356f652a548f is in state STARTED 2025-09-19 07:10:05.657761 | orchestrator | 2025-09-19 07:10:05 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:10:05.658160 | orchestrator | 2025-09-19 07:10:05 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:10:08.698886 | orchestrator | 2025-09-19 07:10:08 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:10:08.700558 | orchestrator | 2025-09-19 07:10:08 | INFO  | Task a0949288-5b2f-4732-bb47-356f652a548f is in state STARTED 2025-09-19 07:10:08.701763 | orchestrator | 2025-09-19 07:10:08 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:10:08.701971 | orchestrator | 2025-09-19 07:10:08 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:10:11.746804 | orchestrator | 2025-09-19 07:10:11 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:10:11.748183 | orchestrator | 2025-09-19 07:10:11 | INFO  | Task a0949288-5b2f-4732-bb47-356f652a548f is in state STARTED 2025-09-19 07:10:11.749706 | orchestrator | 2025-09-19 07:10:11 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:10:11.749736 | orchestrator | 2025-09-19 07:10:11 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:10:14.798185 | orchestrator | 2025-09-19 07:10:14 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:10:14.802856 | orchestrator | 2025-09-19 07:10:14 | INFO  | Task 
a0949288-5b2f-4732-bb47-356f652a548f is in state STARTED 2025-09-19 07:10:14.806514 | orchestrator | 2025-09-19 07:10:14 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:10:14.806542 | orchestrator | 2025-09-19 07:10:14 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:10:17.854937 | orchestrator | 2025-09-19 07:10:17 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:10:17.855897 | orchestrator | 2025-09-19 07:10:17 | INFO  | Task a0949288-5b2f-4732-bb47-356f652a548f is in state STARTED 2025-09-19 07:10:17.857720 | orchestrator | 2025-09-19 07:10:17 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:10:17.857768 | orchestrator | 2025-09-19 07:10:17 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:10:20.908151 | orchestrator | 2025-09-19 07:10:20 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:10:20.909852 | orchestrator | 2025-09-19 07:10:20 | INFO  | Task a0949288-5b2f-4732-bb47-356f652a548f is in state STARTED 2025-09-19 07:10:20.912269 | orchestrator | 2025-09-19 07:10:20 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:10:20.912539 | orchestrator | 2025-09-19 07:10:20 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:10:23.949392 | orchestrator | 2025-09-19 07:10:23 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:10:23.951358 | orchestrator | 2025-09-19 07:10:23 | INFO  | Task a0949288-5b2f-4732-bb47-356f652a548f is in state STARTED 2025-09-19 07:10:23.953223 | orchestrator | 2025-09-19 07:10:23 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:10:23.953615 | orchestrator | 2025-09-19 07:10:23 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:10:26.995909 | orchestrator | 2025-09-19 07:10:26 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state 
STARTED 2025-09-19 07:10:26.997474 | orchestrator | 2025-09-19 07:10:26 | INFO  | Task a0949288-5b2f-4732-bb47-356f652a548f is in state STARTED 2025-09-19 07:10:26.999170 | orchestrator | 2025-09-19 07:10:26 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:10:26.999204 | orchestrator | 2025-09-19 07:10:26 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:10:30.052288 | orchestrator | 2025-09-19 07:10:30 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:10:30.054399 | orchestrator | 2025-09-19 07:10:30 | INFO  | Task a0949288-5b2f-4732-bb47-356f652a548f is in state STARTED 2025-09-19 07:10:30.055587 | orchestrator | 2025-09-19 07:10:30 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:10:30.056300 | orchestrator | 2025-09-19 07:10:30 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:10:33.095350 | orchestrator | 2025-09-19 07:10:33 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:10:33.096250 | orchestrator | 2025-09-19 07:10:33 | INFO  | Task a0949288-5b2f-4732-bb47-356f652a548f is in state STARTED 2025-09-19 07:10:33.098126 | orchestrator | 2025-09-19 07:10:33 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:10:33.098169 | orchestrator | 2025-09-19 07:10:33 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:10:36.136727 | orchestrator | 2025-09-19 07:10:36 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:10:36.137483 | orchestrator | 2025-09-19 07:10:36 | INFO  | Task a0949288-5b2f-4732-bb47-356f652a548f is in state STARTED 2025-09-19 07:10:36.139464 | orchestrator | 2025-09-19 07:10:36 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:10:36.139492 | orchestrator | 2025-09-19 07:10:36 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:10:39.187150 | orchestrator | 
2025-09-19 07:10:39 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:10:39.188075 | orchestrator | 2025-09-19 07:10:39 | INFO  | Task a0949288-5b2f-4732-bb47-356f652a548f is in state STARTED 2025-09-19 07:10:39.189824 | orchestrator | 2025-09-19 07:10:39 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:10:39.189865 | orchestrator | 2025-09-19 07:10:39 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:10:42.223714 | orchestrator | 2025-09-19 07:10:42 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:10:42.224744 | orchestrator | 2025-09-19 07:10:42 | INFO  | Task a0949288-5b2f-4732-bb47-356f652a548f is in state STARTED 2025-09-19 07:10:42.226455 | orchestrator | 2025-09-19 07:10:42 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:10:42.226601 | orchestrator | 2025-09-19 07:10:42 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:10:45.271141 | orchestrator | 2025-09-19 07:10:45 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state STARTED 2025-09-19 07:10:45.271551 | orchestrator | 2025-09-19 07:10:45 | INFO  | Task a0949288-5b2f-4732-bb47-356f652a548f is in state STARTED 2025-09-19 07:10:45.273308 | orchestrator | 2025-09-19 07:10:45 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:10:45.273346 | orchestrator | 2025-09-19 07:10:45 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:10:48.328453 | orchestrator | 2025-09-19 07:10:48 | INFO  | Task ce92b06a-3f34-4e80-8af5-53f6ea5fbb1d is in state SUCCESS 2025-09-19 07:10:48.330550 | orchestrator | 2025-09-19 07:10:48.330592 | orchestrator | 2025-09-19 07:10:48.330604 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-09-19 07:10:48.330616 | orchestrator | 2025-09-19 07:10:48.330626 | orchestrator | TASK [ceph-facts : Include facts.yml] 
******************************************
2025-09-19 07:10:48.330637 | orchestrator | Friday 19 September 2025 07:00:11 +0000 (0:00:00.750) 0:00:00.750 ******
2025-09-19 07:10:48.330649 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:10:48.330661 | orchestrator |
2025-09-19 07:10:48.330740 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-09-19 07:10:48.330752 | orchestrator | Friday 19 September 2025 07:00:12 +0000 (0:00:00.949) 0:00:01.699 ******
2025-09-19 07:10:48.330763 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.330774 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:48.330902 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:48.330916 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:48.331799 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.331826 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.331836 | orchestrator |
2025-09-19 07:10:48.331847 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-09-19 07:10:48.331858 | orchestrator | Friday 19 September 2025 07:00:14 +0000 (0:00:01.794) 0:00:03.494 ******
2025-09-19 07:10:48.331869 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:48.331878 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:48.331889 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:48.331898 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.331908 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.331918 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.331928 | orchestrator |
2025-09-19 07:10:48.331938 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-09-19 07:10:48.331971 | orchestrator | Friday 19 September 2025 07:00:14 +0000 (0:00:01.008) 0:00:04.254 ******
2025-09-19 07:10:48.331981 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:48.331992 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:48.332002 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:48.332012 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.332022 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.332031 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.332041 | orchestrator |
2025-09-19 07:10:48.332052 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-09-19 07:10:48.332062 | orchestrator | Friday 19 September 2025 07:00:15 +0000 (0:00:01.008) 0:00:05.262 ******
2025-09-19 07:10:48.333114 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:48.333139 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:48.333149 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:48.333159 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.333169 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.333179 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.333189 | orchestrator |
2025-09-19 07:10:48.333199 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-09-19 07:10:48.334086 | orchestrator | Friday 19 September 2025 07:00:16 +0000 (0:00:00.860) 0:00:06.123 ******
2025-09-19 07:10:48.334112 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:48.334123 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:48.334134 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:48.334145 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.334156 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.334167 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.334178 | orchestrator |
2025-09-19 07:10:48.334189 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-09-19 07:10:48.334201 | orchestrator | Friday 19 September 2025 07:00:17 +0000 (0:00:00.781) 0:00:06.904 ******
2025-09-19 07:10:48.334212 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:48.334223 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:48.334233 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:48.334244 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.334255 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.334266 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.334277 | orchestrator |
2025-09-19 07:10:48.334288 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-09-19 07:10:48.334299 | orchestrator | Friday 19 September 2025 07:00:18 +0000 (0:00:01.116) 0:00:08.020 ******
2025-09-19 07:10:48.334311 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.334323 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:48.334334 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:48.334345 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.334356 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.334367 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.334378 | orchestrator |
2025-09-19 07:10:48.334389 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-09-19 07:10:48.334400 | orchestrator | Friday 19 September 2025 07:00:19 +0000 (0:00:00.992) 0:00:09.013 ******
2025-09-19 07:10:48.334411 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:48.334422 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:48.334433 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:48.334444 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.334455 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.334466 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.334476 | orchestrator |
2025-09-19 07:10:48.334488 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-09-19 07:10:48.334499 | orchestrator | Friday 19 September 2025 07:00:20 +0000 (0:00:01.281) 0:00:10.295 ******
2025-09-19 07:10:48.334510 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 07:10:48.334521 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 07:10:48.334532 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 07:10:48.334543 | orchestrator |
2025-09-19 07:10:48.334554 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-09-19 07:10:48.334565 | orchestrator | Friday 19 September 2025 07:00:21 +0000 (0:00:00.578) 0:00:10.874 ******
2025-09-19 07:10:48.334576 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:48.334587 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:48.334689 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:48.334704 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.334716 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.334728 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.334741 | orchestrator |
2025-09-19 07:10:48.334849 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-09-19 07:10:48.334867 | orchestrator | Friday 19 September 2025 07:00:23 +0000 (0:00:01.948) 0:00:12.823 ******
2025-09-19 07:10:48.334880 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 07:10:48.334893 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 07:10:48.334906 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 07:10:48.334932 | orchestrator |
2025-09-19 07:10:48.334967 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-09-19 07:10:48.334979 | orchestrator | Friday 19 September 2025 07:00:26 +0000
(0:00:02.804) 0:00:15.627 ******
2025-09-19 07:10:48.334990 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 07:10:48.335001 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 07:10:48.335012 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 07:10:48.335023 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.335034 | orchestrator |
2025-09-19 07:10:48.335045 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-09-19 07:10:48.335056 | orchestrator | Friday 19 September 2025 07:00:26 +0000 (0:00:00.698) 0:00:16.325 ******
2025-09-19 07:10:48.335070 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.335085 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.335096 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.335121 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.335133 | orchestrator |
2025-09-19 07:10:48.335144 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-09-19 07:10:48.335155 | orchestrator | Friday 19 September 2025 07:00:27 +0000 (0:00:00.850) 0:00:17.175 ******
2025-09-19 07:10:48.335168 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.335182 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.335194 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.335205 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.335217 | orchestrator |
2025-09-19 07:10:48.335228 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-09-19 07:10:48.335239 | orchestrator | Friday 19 September 2025 07:00:28 +0000 (0:00:00.550) 0:00:17.725 ******
2025-09-19 07:10:48.335253 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-19 07:00:24.030973', 'end': '2025-09-19 07:00:24.301687', 'delta': '0:00:00.270714', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.335355 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-19 07:00:25.013754', 'end': '2025-09-19 07:00:25.288407', 'delta': '0:00:00.274653', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.335373 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-19 07:00:25.785768', 'end': '2025-09-19 07:00:26.056378', 'delta': '0:00:00.270610', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.335385 | orchestrator | skipping: [testbed-node-0]
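The "Find a running mon container" step recorded above amounts to running `docker ps -q --filter name=ceph-mon-<node>` for each monitor and treating a non-empty result as a running mon. A minimal sketch of that logic; the `run` hook is a hypothetical injection point for testing, not part of ceph-ansible:

```python
import subprocess


def find_running_mon(nodes, run=None):
    """Return the first node whose ceph-mon container is running, else None.

    `run(node)` returns the stdout of `docker ps -q --filter
    name=ceph-mon-<node>`; it is injectable so the selection logic can be
    exercised without Docker.
    """
    if run is None:
        def run(node):
            # Empty stdout means no matching container; mirrors the
            # failed_when: false behaviour seen in the log above.
            return subprocess.run(
                ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{node}"],
                capture_output=True, text=True, check=False,
            ).stdout
    for node in nodes:
        if run(node).strip():
            return node
    return None
```

In the log every `docker ps` call returned an empty ID list (the cluster is not deployed yet), so the `running_mon` facts were skipped.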
2025-09-19 07:10:48.335397 | orchestrator | 2025-09-19 07:10:48.335408 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-19 07:10:48.335420 | orchestrator | Friday 19 September 2025 07:00:28 +0000 (0:00:00.256) 0:00:17.982 ****** 2025-09-19 07:10:48.335436 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:48.335448 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:48.335459 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:48.335470 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.335481 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.335492 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.335504 | orchestrator | 2025-09-19 07:10:48.335515 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-19 07:10:48.335526 | orchestrator | Friday 19 September 2025 07:00:30 +0000 (0:00:02.037) 0:00:20.019 ****** 2025-09-19 07:10:48.335538 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:48.335549 | orchestrator | 2025-09-19 07:10:48.335560 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-19 07:10:48.335571 | orchestrator | Friday 19 September 2025 07:00:31 +0000 (0:00:00.912) 0:00:20.931 ****** 2025-09-19 07:10:48.335582 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.335593 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.335604 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.335615 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.335627 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.335638 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.335649 | orchestrator | 2025-09-19 07:10:48.335660 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-19 07:10:48.335690 | orchestrator | Friday 19 September 2025 07:00:32 +0000 
(0:00:01.247) 0:00:22.179 ****** 2025-09-19 07:10:48.335702 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.335714 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.335725 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.335736 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.335747 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.335766 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.335778 | orchestrator | 2025-09-19 07:10:48.335789 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-19 07:10:48.335800 | orchestrator | Friday 19 September 2025 07:00:34 +0000 (0:00:01.738) 0:00:23.918 ****** 2025-09-19 07:10:48.335811 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.335822 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.335834 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.335845 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.335856 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.335867 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.335878 | orchestrator | 2025-09-19 07:10:48.335889 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-19 07:10:48.335901 | orchestrator | Friday 19 September 2025 07:00:35 +0000 (0:00:00.895) 0:00:24.814 ****** 2025-09-19 07:10:48.335912 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.335923 | orchestrator | 2025-09-19 07:10:48.335935 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-19 07:10:48.335968 | orchestrator | Friday 19 September 2025 07:00:35 +0000 (0:00:00.124) 0:00:24.938 ****** 2025-09-19 07:10:48.335982 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.335995 | orchestrator | 2025-09-19 07:10:48.336008 | orchestrator | TASK [ceph-facts : 
Set_fact fsid] **********************************************
2025-09-19 07:10:48.336021 | orchestrator | Friday 19 September 2025 07:00:35 +0000 (0:00:00.275) 0:00:25.213 ******
2025-09-19 07:10:48.336034 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:48.336046 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.336059 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:48.336071 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.336084 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.336097 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.336110 | orchestrator |
2025-09-19 07:10:48.336123 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-09-19 07:10:48.336213 | orchestrator | Friday 19 September 2025 07:00:36 +0000 (0:00:00.708) 0:00:25.922 ******
2025-09-19 07:10:48.336230 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.336243 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:48.336256 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:48.336269 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.336281 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.336294 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.336307 | orchestrator |
2025-09-19 07:10:48.336320 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-09-19 07:10:48.336332 | orchestrator | Friday 19 September 2025 07:00:37 +0000 (0:00:00.664) 0:00:26.587 ******
2025-09-19 07:10:48.336343 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.336354 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:48.336365 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:48.336377 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.336388 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.336399 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.336410 | orchestrator |
2025-09-19 07:10:48.336421 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-09-19 07:10:48.336432 | orchestrator | Friday 19 September 2025 07:00:37 +0000 (0:00:00.673) 0:00:27.260 ******
2025-09-19 07:10:48.336444 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.336455 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:48.336466 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:48.336477 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.336488 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.336500 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.336511 | orchestrator |
2025-09-19 07:10:48.336522 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-09-19 07:10:48.336542 | orchestrator | Friday 19 September 2025 07:00:38 +0000 (0:00:00.986) 0:00:28.247 ******
2025-09-19 07:10:48.336554 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.336565 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:48.336576 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:48.336587 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.336598 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.336609 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.336620 | orchestrator |
2025-09-19 07:10:48.336631 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-09-19 07:10:48.336643 | orchestrator | Friday 19 September 2025 07:00:39 +0000 (0:00:00.646) 0:00:28.893 ******
2025-09-19 07:10:48.336660 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.336671 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:48.336682 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:48.336694 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.336705 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.336716 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.336727 | orchestrator |
2025-09-19 07:10:48.336738 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-09-19 07:10:48.336749 | orchestrator | Friday 19 September 2025 07:00:40 +0000 (0:00:00.763) 0:00:29.657 ******
2025-09-19 07:10:48.336760 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.336771 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:48.336782 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:48.336793 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.336804 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.336815 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.336826 | orchestrator |
2025-09-19 07:10:48.336838 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-09-19 07:10:48.336849 | orchestrator | Friday 19 September 2025 07:00:41 +0000 (0:00:00.923) 0:00:30.581 ******
2025-09-19 07:10:48.336861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.336873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize':
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.336885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.336897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.336995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89cbc581-c97b-43be-9e42-34404cedab71', 'scsi-SQEMU_QEMU_HARDDISK_89cbc581-c97b-43be-9e42-34404cedab71'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89cbc581-c97b-43be-9e42-34404cedab71-part1', 'scsi-SQEMU_QEMU_HARDDISK_89cbc581-c97b-43be-9e42-34404cedab71-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89cbc581-c97b-43be-9e42-34404cedab71-part14', 'scsi-SQEMU_QEMU_HARDDISK_89cbc581-c97b-43be-9e42-34404cedab71-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89cbc581-c97b-43be-9e42-34404cedab71-part15', 'scsi-SQEMU_QEMU_HARDDISK_89cbc581-c97b-43be-9e42-34404cedab71-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89cbc581-c97b-43be-9e42-34404cedab71-part16', 'scsi-SQEMU_QEMU_HARDDISK_89cbc581-c97b-43be-9e42-34404cedab71-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512,
'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 07:10:48.337198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 07:10:48.337230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fa7bcb17-5b80-45db-868e-e545200cc85f', 'scsi-SQEMU_QEMU_HARDDISK_fa7bcb17-5b80-45db-868e-e545200cc85f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fa7bcb17-5b80-45db-868e-e545200cc85f-part1', 'scsi-SQEMU_QEMU_HARDDISK_fa7bcb17-5b80-45db-868e-e545200cc85f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fa7bcb17-5b80-45db-868e-e545200cc85f-part14', 'scsi-SQEMU_QEMU_HARDDISK_fa7bcb17-5b80-45db-868e-e545200cc85f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fa7bcb17-5b80-45db-868e-e545200cc85f-part15', 'scsi-SQEMU_QEMU_HARDDISK_fa7bcb17-5b80-45db-868e-e545200cc85f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fa7bcb17-5b80-45db-868e-e545200cc85f-part16', 'scsi-SQEMU_QEMU_HARDDISK_fa7bcb17-5b80-45db-868e-e545200cc85f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 07:10:48.337411 |
orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 07:10:48.337436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337448 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.337459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337599 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--787edb9c--1668--5795--8146--b6ac8c49142c-osd--block--787edb9c--1668--5795--8146--b6ac8c49142c', 'dm-uuid-LVM-df8XvXdoHIGkJefp0HH7ZFWONVQKENIEH8wfeuA4imBqhnBxb1pYjK5IgKNUowlj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24b63ec3-2727-4f55-a7d9-4b9cf8404670', 'scsi-SQEMU_QEMU_HARDDISK_24b63ec3-2727-4f55-a7d9-4b9cf8404670'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24b63ec3-2727-4f55-a7d9-4b9cf8404670-part1', 'scsi-SQEMU_QEMU_HARDDISK_24b63ec3-2727-4f55-a7d9-4b9cf8404670-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24b63ec3-2727-4f55-a7d9-4b9cf8404670-part14', 'scsi-SQEMU_QEMU_HARDDISK_24b63ec3-2727-4f55-a7d9-4b9cf8404670-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24b63ec3-2727-4f55-a7d9-4b9cf8404670-part15', 'scsi-SQEMU_QEMU_HARDDISK_24b63ec3-2727-4f55-a7d9-4b9cf8404670-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16':
{'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24b63ec3-2727-4f55-a7d9-4b9cf8404670-part16', 'scsi-SQEMU_QEMU_HARDDISK_24b63ec3-2727-4f55-a7d9-4b9cf8404670-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 07:10:48.337699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--af475f18--71a6--5278--b018--36a08189cb1c-osd--block--af475f18--71a6--5278--b018--36a08189cb1c', 'dm-uuid-LVM-4pb1QPgTa7PYbQ2Pi1TxExoVZ2rv7oE0fQxtBLHrJrDVqmOhdo6Bx4lKLzXwEcrF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 07:10:48.337734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337746 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337780 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:48.337792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.337928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.338062 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a', 'scsi-SQEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part1', 'scsi-SQEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part14', 'scsi-SQEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part15', 'scsi-SQEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part16', 'scsi-SQEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 07:10:48.338087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--787edb9c--1668--5795--8146--b6ac8c49142c-osd--block--787edb9c--1668--5795--8146--b6ac8c49142c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WGeVUS-N1Mf-BB3U-v4Ty-F8zL-2ouv-RgTscQ', 'scsi-0QEMU_QEMU_HARDDISK_a2591162-fd7d-4f7c-a24f-a875e0bfaf5c', 'scsi-SQEMU_QEMU_HARDDISK_a2591162-fd7d-4f7c-a24f-a875e0bfaf5c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 07:10:48.338187 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--af475f18--71a6--5278--b018--36a08189cb1c-osd--block--af475f18--71a6--5278--b018--36a08189cb1c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6sv3aY-kbty-dkce-zN13-8qIJ-2Sck-zjAAQo', 'scsi-0QEMU_QEMU_HARDDISK_1117915d-c4ec-4d47-9877-c3f2a311bdd8', 'scsi-SQEMU_QEMU_HARDDISK_1117915d-c4ec-4d47-9877-c3f2a311bdd8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 07:10:48.338206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af8571bd-f20f-46c1-9b84-53d29d179301', 'scsi-SQEMU_QEMU_HARDDISK_af8571bd-f20f-46c1-9b84-53d29d179301'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 07:10:48.338219 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 07:10:48.338236 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5631a8c0--2403--5b6d--b4ab--3f734fe52f75-osd--block--5631a8c0--2403--5b6d--b4ab--3f734fe52f75', 'dm-uuid-LVM-8FGxhz9XQMPcCWZM3pRrQdYdN4aupjGl8dI6hjzypij1bYPApneewuh1kDUkpKry'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.338249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--32fceb46--e08d--5445--84d6--a85b98e59ab0-osd--block--32fceb46--e08d--5445--84d6--a85b98e59ab0',
'dm-uuid-LVM-587HvxXipBJ4T3nrPgDJLDlXup2mDr2wuf3F1Fe4cf0wd8hu1mNB4rKs7oD1MKGi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.338261 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.338280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.338292 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.338369 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.338385 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:48.338396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.338408 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.338419 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.338431 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.338447 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 07:10:48.338540 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd', 'scsi-SQEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part1', 'scsi-SQEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part14', 'scsi-SQEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part15', 'scsi-SQEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part16', 'scsi-SQEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512,
'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:10:48.338564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5631a8c0--2403--5b6d--b4ab--3f734fe52f75-osd--block--5631a8c0--2403--5b6d--b4ab--3f734fe52f75'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ftiMBK-3syo-qzxd-buQ2-NTAu-qnjQ-3YjiVV', 'scsi-0QEMU_QEMU_HARDDISK_9b35f7c3-f4ee-4f20-a638-8acbecbf2b97', 'scsi-SQEMU_QEMU_HARDDISK_9b35f7c3-f4ee-4f20-a638-8acbecbf2b97'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:10:48.338575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--32fceb46--e08d--5445--84d6--a85b98e59ab0-osd--block--32fceb46--e08d--5445--84d6--a85b98e59ab0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qdrrtu-Epqe-kEGe-GCqz-8pei-2gK0-ll8Cgo', 'scsi-0QEMU_QEMU_HARDDISK_0ec87ec4-de78-4354-a913-8c3da733e508', 'scsi-SQEMU_QEMU_HARDDISK_0ec87ec4-de78-4354-a913-8c3da733e508'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:10:48.338591 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f326ea53-fd8a-4d1e-8637-ed74e9f7229b', 'scsi-SQEMU_QEMU_HARDDISK_f326ea53-fd8a-4d1e-8637-ed74e9f7229b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:10:48.338602 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2af2e838--b751--5a2f--ab09--cbc0dc745073-osd--block--2af2e838--b751--5a2f--ab09--cbc0dc745073', 'dm-uuid-LVM-stnS00GaKqmnkIfk0RfxskLg1ZJTWmtFpfznfUsoNpRCwb8nwwfI6Oqo6xQHFpUa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 07:10:48.338618 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--03228564--3151--5027--920d--737061be0eca-osd--block--03228564--3151--5027--920d--737061be0eca', 'dm-uuid-LVM-eI6w1uc0XkNtnqpOQjt0bpJDUwBAvRDMkQ65lj4tyaEBdNJzRpKBEpWbpQ4ys0Zz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 07:10:48.338628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:10:48.338699 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:10:48.338714 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-09-19 07:10:48.338724 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.338735 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:10:48.338750 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:10:48.338760 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:10:48.338771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:10:48.338790 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:10:48.338801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:10:48.338888 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d', 'scsi-SQEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part1', 'scsi-SQEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part14', 'scsi-SQEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 
'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part15', 'scsi-SQEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part16', 'scsi-SQEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:10:48.338910 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2af2e838--b751--5a2f--ab09--cbc0dc745073-osd--block--2af2e838--b751--5a2f--ab09--cbc0dc745073'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Yyfqwl-HK9C-vUWq-ezQ3-J1x4-v9wL-Z7Zvjt', 'scsi-0QEMU_QEMU_HARDDISK_1f9d1cec-7d6c-4c71-8749-cd7e53c954b2', 'scsi-SQEMU_QEMU_HARDDISK_1f9d1cec-7d6c-4c71-8749-cd7e53c954b2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:10:48.338921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--03228564--3151--5027--920d--737061be0eca-osd--block--03228564--3151--5027--920d--737061be0eca'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4gNePi-p6bZ-PnsU-Kexi-wYB8-ohCZ-z8YGsJ', 'scsi-0QEMU_QEMU_HARDDISK_68d7532d-29ea-4f3d-b7b6-675f70301c39', 'scsi-SQEMU_QEMU_HARDDISK_68d7532d-29ea-4f3d-b7b6-675f70301c39'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:10:48.338939 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c8e79e65-71f7-4ae8-8fa4-6c07ef757528', 'scsi-SQEMU_QEMU_HARDDISK_c8e79e65-71f7-4ae8-8fa4-6c07ef757528'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 07:10:48.338966 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 07:10:48.339037 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.339051 | orchestrator |
2025-09-19 07:10:48.339061 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-09-19 07:10:48.339072 | orchestrator | Friday 19 September 2025 07:00:42 +0000 (0:00:01.122) 0:00:31.704 ******
2025-09-19 07:10:48.339083 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512',
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:10:48.339094 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:10:48.339110 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:10:48.339131 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:10:48.339142 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:10:48.339153 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:10:48.339239 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-09-19 07:10:48.339255 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:10:48.339272 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89cbc581-c97b-43be-9e42-34404cedab71', 'scsi-SQEMU_QEMU_HARDDISK_89cbc581-c97b-43be-9e42-34404cedab71'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89cbc581-c97b-43be-9e42-34404cedab71-part1', 'scsi-SQEMU_QEMU_HARDDISK_89cbc581-c97b-43be-9e42-34404cedab71-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89cbc581-c97b-43be-9e42-34404cedab71-part14', 'scsi-SQEMU_QEMU_HARDDISK_89cbc581-c97b-43be-9e42-34404cedab71-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89cbc581-c97b-43be-9e42-34404cedab71-part15', 'scsi-SQEMU_QEMU_HARDDISK_89cbc581-c97b-43be-9e42-34404cedab71-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89cbc581-c97b-43be-9e42-34404cedab71-part16', 'scsi-SQEMU_QEMU_HARDDISK_89cbc581-c97b-43be-9e42-34404cedab71-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-19 07:10:48.339350 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:10:48.339365 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:10:48.339376 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:10:48.339391 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:10:48.339408 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:10:48.339418 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.339429 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.339500 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.339531 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.339542 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.339559 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fa7bcb17-5b80-45db-868e-e545200cc85f', 'scsi-SQEMU_QEMU_HARDDISK_fa7bcb17-5b80-45db-868e-e545200cc85f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fa7bcb17-5b80-45db-868e-e545200cc85f-part1', 'scsi-SQEMU_QEMU_HARDDISK_fa7bcb17-5b80-45db-868e-e545200cc85f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fa7bcb17-5b80-45db-868e-e545200cc85f-part14', 'scsi-SQEMU_QEMU_HARDDISK_fa7bcb17-5b80-45db-868e-e545200cc85f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fa7bcb17-5b80-45db-868e-e545200cc85f-part15', 'scsi-SQEMU_QEMU_HARDDISK_fa7bcb17-5b80-45db-868e-e545200cc85f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fa7bcb17-5b80-45db-868e-e545200cc85f-part16', 'scsi-SQEMU_QEMU_HARDDISK_fa7bcb17-5b80-45db-868e-e545200cc85f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.339577 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.339650 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.339665 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.339680 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.339697 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.339707 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.339718 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.339787 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.339816 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.339834 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24b63ec3-2727-4f55-a7d9-4b9cf8404670', 'scsi-SQEMU_QEMU_HARDDISK_24b63ec3-2727-4f55-a7d9-4b9cf8404670'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24b63ec3-2727-4f55-a7d9-4b9cf8404670-part1', 'scsi-SQEMU_QEMU_HARDDISK_24b63ec3-2727-4f55-a7d9-4b9cf8404670-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24b63ec3-2727-4f55-a7d9-4b9cf8404670-part14', 'scsi-SQEMU_QEMU_HARDDISK_24b63ec3-2727-4f55-a7d9-4b9cf8404670-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24b63ec3-2727-4f55-a7d9-4b9cf8404670-part15', 'scsi-SQEMU_QEMU_HARDDISK_24b63ec3-2727-4f55-a7d9-4b9cf8404670-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24b63ec3-2727-4f55-a7d9-4b9cf8404670-part16', 'scsi-SQEMU_QEMU_HARDDISK_24b63ec3-2727-4f55-a7d9-4b9cf8404670-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.339852 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.339863 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:48.339935 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--787edb9c--1668--5795--8146--b6ac8c49142c-osd--block--787edb9c--1668--5795--8146--b6ac8c49142c', 'dm-uuid-LVM-df8XvXdoHIGkJefp0HH7ZFWONVQKENIEH8wfeuA4imBqhnBxb1pYjK5IgKNUowlj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.339970 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--af475f18--71a6--5278--b018--36a08189cb1c-osd--block--af475f18--71a6--5278--b018--36a08189cb1c', 'dm-uuid-LVM-4pb1QPgTa7PYbQ2Pi1TxExoVZ2rv7oE0fQxtBLHrJrDVqmOhdo6Bx4lKLzXwEcrF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.339988 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:48.340004 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340015 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340025 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340036 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340120 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340136 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340154 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340170 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340180 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5631a8c0--2403--5b6d--b4ab--3f734fe52f75-osd--block--5631a8c0--2403--5b6d--b4ab--3f734fe52f75', 'dm-uuid-LVM-8FGxhz9XQMPcCWZM3pRrQdYdN4aupjGl8dI6hjzypij1bYPApneewuh1kDUkpKry'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340253 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a', 'scsi-SQEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part1', 'scsi-SQEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part14', 'scsi-SQEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part15', 'scsi-SQEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part16', 'scsi-SQEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340285 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--787edb9c--1668--5795--8146--b6ac8c49142c-osd--block--787edb9c--1668--5795--8146--b6ac8c49142c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WGeVUS-N1Mf-BB3U-v4Ty-F8zL-2ouv-RgTscQ', 'scsi-0QEMU_QEMU_HARDDISK_a2591162-fd7d-4f7c-a24f-a875e0bfaf5c', 'scsi-SQEMU_QEMU_HARDDISK_a2591162-fd7d-4f7c-a24f-a875e0bfaf5c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340297 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--32fceb46--e08d--5445--84d6--a85b98e59ab0-osd--block--32fceb46--e08d--5445--84d6--a85b98e59ab0', 'dm-uuid-LVM-587HvxXipBJ4T3nrPgDJLDlXup2mDr2wuf3F1Fe4cf0wd8hu1mNB4rKs7oD1MKGi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340308 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--af475f18--71a6--5278--b018--36a08189cb1c-osd--block--af475f18--71a6--5278--b018--36a08189cb1c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6sv3aY-kbty-dkce-zN13-8qIJ-2Sck-zjAAQo', 'scsi-0QEMU_QEMU_HARDDISK_1117915d-c4ec-4d47-9877-c3f2a311bdd8', 'scsi-SQEMU_QEMU_HARDDISK_1117915d-c4ec-4d47-9877-c3f2a311bdd8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340379 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340394 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af8571bd-f20f-46c1-9b84-53d29d179301', 'scsi-SQEMU_QEMU_HARDDISK_af8571bd-f20f-46c1-9b84-53d29d179301'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340450 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340462 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340473 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340483 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.340493 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340562 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340576 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340594 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340610 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340697 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd', 'scsi-SQEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part1', 'scsi-SQEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part14', 'scsi-SQEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part15', 'scsi-SQEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part16', 'scsi-SQEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340721 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--5631a8c0--2403--5b6d--b4ab--3f734fe52f75-osd--block--5631a8c0--2403--5b6d--b4ab--3f734fe52f75'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ftiMBK-3syo-qzxd-buQ2-NTAu-qnjQ-3YjiVV', 'scsi-0QEMU_QEMU_HARDDISK_9b35f7c3-f4ee-4f20-a638-8acbecbf2b97', 'scsi-SQEMU_QEMU_HARDDISK_9b35f7c3-f4ee-4f20-a638-8acbecbf2b97'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340738 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--32fceb46--e08d--5445--84d6--a85b98e59ab0-osd--block--32fceb46--e08d--5445--84d6--a85b98e59ab0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qdrrtu-Epqe-kEGe-GCqz-8pei-2gK0-ll8Cgo', 'scsi-0QEMU_QEMU_HARDDISK_0ec87ec4-de78-4354-a913-8c3da733e508', 'scsi-SQEMU_QEMU_HARDDISK_0ec87ec4-de78-4354-a913-8c3da733e508'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340749 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f326ea53-fd8a-4d1e-8637-ed74e9f7229b', 'scsi-SQEMU_QEMU_HARDDISK_f326ea53-fd8a-4d1e-8637-ed74e9f7229b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340759 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340769 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.340840 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2af2e838--b751--5a2f--ab09--cbc0dc745073-osd--block--2af2e838--b751--5a2f--ab09--cbc0dc745073', 'dm-uuid-LVM-stnS00GaKqmnkIfk0RfxskLg1ZJTWmtFpfznfUsoNpRCwb8nwwfI6Oqo6xQHFpUa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340861 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--03228564--3151--5027--920d--737061be0eca-osd--block--03228564--3151--5027--920d--737061be0eca', 'dm-uuid-LVM-eI6w1uc0XkNtnqpOQjt0bpJDUwBAvRDMkQ65lj4tyaEBdNJzRpKBEpWbpQ4ys0Zz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.340877 | orchestrator | skipping:
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:10:48.340888 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:10:48.340899 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:10:48.340910 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:10:48.341053 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:10:48.341080 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:10:48.341091 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:10:48.341108 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:10:48.341181 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d', 'scsi-SQEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part1', 'scsi-SQEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part14', 'scsi-SQEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part15', 'scsi-SQEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part16', 'scsi-SQEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-19 07:10:48.341204 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2af2e838--b751--5a2f--ab09--cbc0dc745073-osd--block--2af2e838--b751--5a2f--ab09--cbc0dc745073'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Yyfqwl-HK9C-vUWq-ezQ3-J1x4-v9wL-Z7Zvjt', 'scsi-0QEMU_QEMU_HARDDISK_1f9d1cec-7d6c-4c71-8749-cd7e53c954b2', 'scsi-SQEMU_QEMU_HARDDISK_1f9d1cec-7d6c-4c71-8749-cd7e53c954b2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.341216 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--03228564--3151--5027--920d--737061be0eca-osd--block--03228564--3151--5027--920d--737061be0eca'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4gNePi-p6bZ-PnsU-Kexi-wYB8-ohCZ-z8YGsJ', 'scsi-0QEMU_QEMU_HARDDISK_68d7532d-29ea-4f3d-b7b6-675f70301c39', 'scsi-SQEMU_QEMU_HARDDISK_68d7532d-29ea-4f3d-b7b6-675f70301c39'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.341227 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c8e79e65-71f7-4ae8-8fa4-6c07ef757528', 'scsi-SQEMU_QEMU_HARDDISK_c8e79e65-71f7-4ae8-8fa4-6c07ef757528'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.341237 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:10:48.341255 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.341265 | orchestrator |
2025-09-19 07:10:48.341275 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-09-19 07:10:48.341285 | orchestrator | Friday 19 September 2025 07:00:43 +0000 (0:00:01.116) 0:00:32.820 ******
2025-09-19 07:10:48.341294 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:48.341302 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:48.341310 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:48.341376 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.341388 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.341396 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.341404 | orchestrator |
2025-09-19 07:10:48.341412 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-09-19 07:10:48.341421 | orchestrator | Friday 19 September 2025 07:00:44 +0000 (0:00:01.420) 0:00:34.240 ******
2025-09-19 07:10:48.341429 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:48.341437 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:48.341445 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:48.341453 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.341461 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.341469 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.341477 | orchestrator |
2025-09-19 07:10:48.341486 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-19 07:10:48.341494 | orchestrator | Friday 19 September 2025 07:00:45 +0000 (0:00:00.629) 0:00:34.869 ******
2025-09-19 07:10:48.341502 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.341510 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:48.341518 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:48.341526 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.341534 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.341543 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.341551 | orchestrator |
2025-09-19 07:10:48.341559 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-19 07:10:48.341598 | orchestrator | Friday 19 September 2025 07:00:46 +0000 (0:00:01.161) 0:00:36.031 ******
2025-09-19 07:10:48.341607 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.341615 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:48.341623 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:48.341631 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.341639 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.341647 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.341666 | orchestrator |
2025-09-19 07:10:48.341674 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-19 07:10:48.341683 | orchestrator | Friday 19 September 2025 07:00:47 +0000 (0:00:00.996) 0:00:36.880 ******
2025-09-19 07:10:48.341691 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.341699 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:48.341706 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:48.341714 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.341722 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.341730 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.341738 | orchestrator |
2025-09-19 07:10:48.341754 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-19 07:10:48.341762 | orchestrator | Friday 19 September 2025 07:00:48 +0000 (0:00:00.996) 0:00:37.877 ******
2025-09-19 07:10:48.341770 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.341778 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:48.341786 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:48.341794 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.341802 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.341811 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.341819 | orchestrator |
2025-09-19 07:10:48.341827 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-09-19 07:10:48.341843 | orchestrator | Friday 19 September 2025 07:00:49 +0000 (0:00:01.106) 0:00:38.983 ******
2025-09-19 07:10:48.341852 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 07:10:48.341860 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-09-19 07:10:48.341868 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-09-19 07:10:48.341876 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 07:10:48.341884 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-09-19 07:10:48.341892 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 07:10:48.341900 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-09-19 07:10:48.341907 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-09-19 07:10:48.341915 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-09-19 07:10:48.341923 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-09-19 07:10:48.341931 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-09-19 07:10:48.341939 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-09-19 07:10:48.341963 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-09-19 07:10:48.341971 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-09-19 07:10:48.341979 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-09-19 07:10:48.341987 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-09-19 07:10:48.341995 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-09-19 07:10:48.342004 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-09-19 07:10:48.342013 | orchestrator |
2025-09-19 07:10:48.342058 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-09-19 07:10:48.342068 | orchestrator | Friday 19 September 2025 07:00:53 +0000 (0:00:03.863) 0:00:42.847 ******
2025-09-19 07:10:48.342078 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 07:10:48.342086 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 07:10:48.342094 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 07:10:48.342102 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.342110 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-09-19 07:10:48.342118 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-09-19 07:10:48.342126 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-09-19 07:10:48.342134 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-09-19 07:10:48.342142 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:48.342150 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-09-19 07:10:48.342159 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-09-19 07:10:48.342167 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-19 07:10:48.342204 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-19 07:10:48.342213 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-19 07:10:48.342221 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:48.342230 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.342238 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-19 07:10:48.342246 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-19 07:10:48.342254 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-19 07:10:48.342262 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-19 07:10:48.342270 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.342278 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-19 07:10:48.342286 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-19 07:10:48.342294 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.342302 | orchestrator |
2025-09-19 07:10:48.342311 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-09-19 07:10:48.342325 | orchestrator | Friday 19 September 2025 07:00:54 +0000 (0:00:00.998) 0:00:43.845 ******
2025-09-19 07:10:48.342333 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.342341 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:48.342350 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:48.342358 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:10:48.342366 | orchestrator |
2025-09-19 07:10:48.342374 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-19 07:10:48.342383 | orchestrator | Friday 19 September 2025 07:00:55 +0000 (0:00:01.090) 0:00:44.935 ******
2025-09-19 07:10:48.342391 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.342399 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.342407 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.342415 | orchestrator |
2025-09-19 07:10:48.342423 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-19 07:10:48.342431 | orchestrator | Friday 19 September 2025 07:00:55 +0000 (0:00:00.449) 0:00:45.385 ******
2025-09-19 07:10:48.342439 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.342447 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.342460 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.342468 | orchestrator |
2025-09-19 07:10:48.342476 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-19 07:10:48.342484 | orchestrator | Friday 19 September 2025 07:00:56 +0000 (0:00:00.641) 0:00:46.027 ******
2025-09-19 07:10:48.342493 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.342501 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.342509 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.342517 | orchestrator |
2025-09-19 07:10:48.342525 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-19 07:10:48.342533 | orchestrator | Friday 19 September 2025 07:00:56 +0000 (0:00:00.400) 0:00:46.427 ******
2025-09-19 07:10:48.342541 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.342549 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.342557 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.342565 | orchestrator |
2025-09-19 07:10:48.342573 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-19 07:10:48.342582 | orchestrator | Friday 19 September 2025 07:00:57 +0000 (0:00:00.636) 0:00:47.063 ******
2025-09-19 07:10:48.342590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 07:10:48.342598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 07:10:48.342606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 07:10:48.342614 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.342622 | orchestrator |
2025-09-19 07:10:48.342630 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-19 07:10:48.342638 | orchestrator | Friday 19 September 2025 07:00:58 +0000 (0:00:00.446) 0:00:47.510 ******
2025-09-19 07:10:48.342646 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 07:10:48.342654 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 07:10:48.342662 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 07:10:48.342670 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.342678 | orchestrator |
2025-09-19 07:10:48.342686 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-19 07:10:48.342695 | orchestrator | Friday 19 September 2025 07:00:59 +0000 (0:00:00.962) 0:00:48.472 ******
2025-09-19 07:10:48.342703 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 07:10:48.342711 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 07:10:48.342719 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 07:10:48.342727 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.342741 | orchestrator |
2025-09-19 07:10:48.342749 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-19 07:10:48.342757 | orchestrator | Friday 19 September 2025 07:00:59 +0000 (0:00:00.632) 0:00:49.105 ******
2025-09-19 07:10:48.342765 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.342773 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.342781 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.342789 | orchestrator |
2025-09-19 07:10:48.342797 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-19 07:10:48.342805 | orchestrator | Friday 19 September 2025 07:01:00 +0000 (0:00:00.888) 0:00:49.994 ******
2025-09-19 07:10:48.342814 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-19 07:10:48.342822 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-19 07:10:48.342830 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-19 07:10:48.342838 | orchestrator |
2025-09-19 07:10:48.342846 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-09-19 07:10:48.342854 | orchestrator | Friday 19 September 2025 07:01:01 +0000 (0:00:01.377) 0:00:51.371 ******
2025-09-19 07:10:48.342884 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 07:10:48.342893 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 07:10:48.342901 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 07:10:48.342909 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-09-19 07:10:48.342917 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-19 07:10:48.342925 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-19 07:10:48.342933 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-19 07:10:48.342941 | orchestrator |
2025-09-19 07:10:48.342964 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-09-19 07:10:48.342973 | orchestrator | Friday 19 September 2025 07:01:02 +0000 (0:00:01.043) 0:00:52.415 ******
2025-09-19 07:10:48.342981 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 07:10:48.342989 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 07:10:48.342997 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 07:10:48.343005 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-09-19 07:10:48.343013 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-19 07:10:48.343021 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-19 07:10:48.343029 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-19 07:10:48.343037 | orchestrator |
2025-09-19 07:10:48.343045 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-19 07:10:48.343053 | orchestrator | Friday 19 September 2025 07:01:05 +0000 (0:00:02.794) 0:00:55.210 ******
2025-09-19 07:10:48.343066 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:10:48.343075 | orchestrator |
2025-09-19 07:10:48.343083 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-19 07:10:48.343091 | orchestrator | Friday 19 September 2025 07:01:07 +0000 (0:00:01.689) 0:00:56.899 ******
2025-09-19 07:10:48.343099 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:10:48.343108 | orchestrator |
2025-09-19 07:10:48.343116 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-19 07:10:48.343130 | orchestrator | Friday 19 September 2025 07:01:09 +0000 (0:00:01.786) 0:00:58.686 ******
2025-09-19 07:10:48.343138 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:48.343146 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:48.343154 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.343163 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:48.343171 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.343179 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.343187 | orchestrator |
2025-09-19 07:10:48.343195 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-19 07:10:48.343203 | orchestrator | Friday 19 September 2025 07:01:11 +0000 (0:00:02.450) 0:01:01.137 ******
2025-09-19 07:10:48.343211 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.343219 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:48.343227 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:48.343235 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.343243 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.343251 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.343259 | orchestrator |
2025-09-19 07:10:48.343267 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-19 07:10:48.343275 | orchestrator | Friday 19 September 2025 07:01:13 +0000 (0:00:01.630) 0:01:02.767 ******
2025-09-19 07:10:48.343283 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.343292 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:48.343300 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:48.343308 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.343316 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.343324 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.343332 | orchestrator |
2025-09-19 07:10:48.343340 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-19 07:10:48.343348 | orchestrator | Friday 19 September 2025 07:01:15 +0000 (0:00:01.825) 0:01:04.593 ******
2025-09-19 07:10:48.343356 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.343364 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:48.343372 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:48.343380 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.343389 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.343397 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.343405 | orchestrator |
2025-09-19 07:10:48.343413 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-19 07:10:48.343421 | orchestrator | Friday 19 September 2025 07:01:16 +0000 (0:00:01.072) 0:01:05.666 ******
2025-09-19 07:10:48.343429 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:48.343437 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:48.343445 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.343453 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.343461 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.343469 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:48.343477 | orchestrator |
2025-09-19 07:10:48.343485 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-19 07:10:48.343493 | orchestrator | Friday 19 September 2025 07:01:17 +0000 (0:00:00.859) 0:01:06.525 ******
2025-09-19 07:10:48.343523 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:48.343533 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.343541 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:48.343549 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.343557 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.343565 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.343573 |
orchestrator | 2025-09-19 07:10:48.343581 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-19 07:10:48.343589 | orchestrator | Friday 19 September 2025 07:01:18 +0000 (0:00:00.955) 0:01:07.480 ****** 2025-09-19 07:10:48.343597 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.343605 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.343614 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.343626 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.343634 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.343642 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.343650 | orchestrator | 2025-09-19 07:10:48.343658 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-19 07:10:48.343666 | orchestrator | Friday 19 September 2025 07:01:19 +0000 (0:00:01.044) 0:01:08.525 ****** 2025-09-19 07:10:48.343674 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:48.343682 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:48.343690 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:48.343698 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.343706 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.343714 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.343722 | orchestrator | 2025-09-19 07:10:48.343730 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-19 07:10:48.343738 | orchestrator | Friday 19 September 2025 07:01:20 +0000 (0:00:01.446) 0:01:09.971 ****** 2025-09-19 07:10:48.343746 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:48.343754 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:48.343762 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:48.343770 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.343778 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.343786 | 
orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.343794 | orchestrator | 2025-09-19 07:10:48.343802 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-19 07:10:48.343810 | orchestrator | Friday 19 September 2025 07:01:22 +0000 (0:00:01.634) 0:01:11.606 ****** 2025-09-19 07:10:48.343818 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.343826 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.343834 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.343842 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.343854 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.343862 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.343870 | orchestrator | 2025-09-19 07:10:48.343878 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-19 07:10:48.343886 | orchestrator | Friday 19 September 2025 07:01:22 +0000 (0:00:00.672) 0:01:12.279 ****** 2025-09-19 07:10:48.343894 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:48.343903 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:48.343911 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:48.343919 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.343927 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.343935 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.343943 | orchestrator | 2025-09-19 07:10:48.344002 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-19 07:10:48.344010 | orchestrator | Friday 19 September 2025 07:01:23 +0000 (0:00:00.879) 0:01:13.158 ****** 2025-09-19 07:10:48.344018 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.344026 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.344034 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.344042 | orchestrator | ok: 
[testbed-node-3] 2025-09-19 07:10:48.344050 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.344058 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.344067 | orchestrator | 2025-09-19 07:10:48.344075 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-19 07:10:48.344083 | orchestrator | Friday 19 September 2025 07:01:24 +0000 (0:00:00.670) 0:01:13.828 ****** 2025-09-19 07:10:48.344091 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.344099 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.344107 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.344115 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.344123 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.344131 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.344140 | orchestrator | 2025-09-19 07:10:48.344148 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-19 07:10:48.344161 | orchestrator | Friday 19 September 2025 07:01:25 +0000 (0:00:00.924) 0:01:14.752 ****** 2025-09-19 07:10:48.344170 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.344178 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.344186 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.344194 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.344202 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.344210 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.344218 | orchestrator | 2025-09-19 07:10:48.344227 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-19 07:10:48.344235 | orchestrator | Friday 19 September 2025 07:01:25 +0000 (0:00:00.620) 0:01:15.373 ****** 2025-09-19 07:10:48.344243 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.344251 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.344259 | 
orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.344267 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.344275 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.344283 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.344291 | orchestrator | 2025-09-19 07:10:48.344300 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-19 07:10:48.344308 | orchestrator | Friday 19 September 2025 07:01:26 +0000 (0:00:01.031) 0:01:16.405 ****** 2025-09-19 07:10:48.344316 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.344324 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.344332 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.344340 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.344348 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.344356 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.344364 | orchestrator | 2025-09-19 07:10:48.344372 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-19 07:10:48.344404 | orchestrator | Friday 19 September 2025 07:01:28 +0000 (0:00:01.365) 0:01:17.770 ****** 2025-09-19 07:10:48.344414 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:48.344422 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:48.344430 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:48.344438 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.344446 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.344454 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.344461 | orchestrator | 2025-09-19 07:10:48.344468 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-19 07:10:48.344475 | orchestrator | Friday 19 September 2025 07:01:29 +0000 (0:00:00.856) 0:01:18.627 ****** 2025-09-19 07:10:48.344482 | orchestrator | ok: 
[testbed-node-0] 2025-09-19 07:10:48.344489 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:48.344496 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:48.344502 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.344509 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.344516 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.344523 | orchestrator | 2025-09-19 07:10:48.344530 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-19 07:10:48.344536 | orchestrator | Friday 19 September 2025 07:01:29 +0000 (0:00:00.797) 0:01:19.424 ****** 2025-09-19 07:10:48.344543 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:48.344550 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:48.344557 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:48.344564 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.344570 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.344577 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.344584 | orchestrator | 2025-09-19 07:10:48.344591 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-09-19 07:10:48.344598 | orchestrator | Friday 19 September 2025 07:01:31 +0000 (0:00:01.415) 0:01:20.840 ****** 2025-09-19 07:10:48.344604 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:48.344611 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:48.344622 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:48.344629 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:10:48.344636 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:10:48.344643 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:10:48.344650 | orchestrator | 2025-09-19 07:10:48.344657 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-09-19 07:10:48.344663 | orchestrator | Friday 19 September 2025 07:01:33 +0000 (0:00:02.389) 
0:01:23.229 ****** 2025-09-19 07:10:48.344670 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:48.344677 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:48.344684 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:48.344694 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:10:48.344701 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:10:48.344708 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:10:48.344715 | orchestrator | 2025-09-19 07:10:48.344722 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-09-19 07:10:48.344729 | orchestrator | Friday 19 September 2025 07:01:35 +0000 (0:00:02.043) 0:01:25.272 ****** 2025-09-19 07:10:48.344736 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:10:48.344743 | orchestrator | 2025-09-19 07:10:48.344750 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-09-19 07:10:48.344757 | orchestrator | Friday 19 September 2025 07:01:36 +0000 (0:00:01.020) 0:01:26.292 ****** 2025-09-19 07:10:48.344763 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.344770 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.344777 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.344784 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.344790 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.344797 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.344804 | orchestrator | 2025-09-19 07:10:48.344811 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-09-19 07:10:48.344818 | orchestrator | Friday 19 September 2025 07:01:37 +0000 (0:00:00.656) 0:01:26.948 ****** 2025-09-19 07:10:48.344824 | orchestrator | skipping: 
[testbed-node-0] 2025-09-19 07:10:48.344831 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.344838 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.344844 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.344851 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.344858 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.344865 | orchestrator | 2025-09-19 07:10:48.344872 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-09-19 07:10:48.344878 | orchestrator | Friday 19 September 2025 07:01:38 +0000 (0:00:00.529) 0:01:27.478 ****** 2025-09-19 07:10:48.344885 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-19 07:10:48.344892 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-19 07:10:48.344899 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-19 07:10:48.344906 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-19 07:10:48.344912 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-19 07:10:48.344919 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-19 07:10:48.344926 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-19 07:10:48.344933 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-19 07:10:48.344940 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-19 07:10:48.344963 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-19 07:10:48.344978 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-19 
07:10:48.344985 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-19 07:10:48.344992 | orchestrator | 2025-09-19 07:10:48.345018 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-09-19 07:10:48.345026 | orchestrator | Friday 19 September 2025 07:01:39 +0000 (0:00:01.377) 0:01:28.856 ****** 2025-09-19 07:10:48.345033 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:48.345040 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:48.345046 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:48.345053 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:10:48.345060 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:10:48.345067 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:10:48.345074 | orchestrator | 2025-09-19 07:10:48.345080 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-09-19 07:10:48.345087 | orchestrator | Friday 19 September 2025 07:01:40 +0000 (0:00:00.934) 0:01:29.790 ****** 2025-09-19 07:10:48.345094 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.345101 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.345108 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.345114 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.345121 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.345128 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.345135 | orchestrator | 2025-09-19 07:10:48.345142 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-09-19 07:10:48.345148 | orchestrator | Friday 19 September 2025 07:01:41 +0000 (0:00:00.648) 0:01:30.439 ****** 2025-09-19 07:10:48.345155 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.345162 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.345169 | 
orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.345175 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.345182 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.345189 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.345196 | orchestrator | 2025-09-19 07:10:48.345202 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-09-19 07:10:48.345209 | orchestrator | Friday 19 September 2025 07:01:41 +0000 (0:00:00.537) 0:01:30.977 ****** 2025-09-19 07:10:48.345216 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.345223 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.345229 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.345236 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.345243 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.345249 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.345256 | orchestrator | 2025-09-19 07:10:48.345263 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-09-19 07:10:48.345273 | orchestrator | Friday 19 September 2025 07:01:42 +0000 (0:00:00.632) 0:01:31.609 ****** 2025-09-19 07:10:48.345280 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:10:48.345287 | orchestrator | 2025-09-19 07:10:48.345294 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-09-19 07:10:48.345301 | orchestrator | Friday 19 September 2025 07:01:43 +0000 (0:00:01.054) 0:01:32.663 ****** 2025-09-19 07:10:48.345308 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.345314 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:48.345321 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:48.345328 | orchestrator | ok: 
[testbed-node-2] 2025-09-19 07:10:48.345334 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.345341 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.345348 | orchestrator | 2025-09-19 07:10:48.345355 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-09-19 07:10:48.345366 | orchestrator | Friday 19 September 2025 07:02:30 +0000 (0:00:47.439) 0:02:20.103 ****** 2025-09-19 07:10:48.345373 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-19 07:10:48.345380 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-19 07:10:48.345387 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-19 07:10:48.345394 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.345401 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-19 07:10:48.345407 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-19 07:10:48.345414 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-19 07:10:48.345421 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.345428 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-19 07:10:48.345434 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-19 07:10:48.345441 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-19 07:10:48.345448 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.345455 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-19 07:10:48.345461 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-19 07:10:48.345468 | orchestrator | skipping: 
[testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-19 07:10:48.345475 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.345482 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-19 07:10:48.345488 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-19 07:10:48.345495 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-19 07:10:48.345502 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.345508 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-19 07:10:48.345515 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-19 07:10:48.345522 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-19 07:10:48.345546 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.345554 | orchestrator | 2025-09-19 07:10:48.345561 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-09-19 07:10:48.345568 | orchestrator | Friday 19 September 2025 07:02:31 +0000 (0:00:00.866) 0:02:20.969 ****** 2025-09-19 07:10:48.345575 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.345582 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.345588 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.345595 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.345602 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.345609 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.345616 | orchestrator | 2025-09-19 07:10:48.345622 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-09-19 07:10:48.345629 | orchestrator | Friday 19 September 2025 07:02:32 +0000 (0:00:00.593) 0:02:21.562 ****** 2025-09-19 07:10:48.345636 | 
orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.345643 | orchestrator | 2025-09-19 07:10:48.345650 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-09-19 07:10:48.345657 | orchestrator | Friday 19 September 2025 07:02:32 +0000 (0:00:00.156) 0:02:21.718 ****** 2025-09-19 07:10:48.345664 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.345670 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.345677 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.345684 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.345691 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.345698 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.345712 | orchestrator | 2025-09-19 07:10:48.345719 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-09-19 07:10:48.345725 | orchestrator | Friday 19 September 2025 07:02:33 +0000 (0:00:00.889) 0:02:22.608 ****** 2025-09-19 07:10:48.345732 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.345739 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.345746 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.345752 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.345759 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.345766 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.345773 | orchestrator | 2025-09-19 07:10:48.345780 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-09-19 07:10:48.345786 | orchestrator | Friday 19 September 2025 07:02:33 +0000 (0:00:00.658) 0:02:23.266 ****** 2025-09-19 07:10:48.345793 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.345800 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.345810 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.345817 | 
orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.345824 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.345831 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.345837 | orchestrator | 2025-09-19 07:10:48.345844 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-09-19 07:10:48.345851 | orchestrator | Friday 19 September 2025 07:02:34 +0000 (0:00:00.825) 0:02:24.091 ****** 2025-09-19 07:10:48.345858 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:48.345865 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:48.345872 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.345878 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:48.345885 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.345892 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.345899 | orchestrator | 2025-09-19 07:10:48.345906 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-09-19 07:10:48.345913 | orchestrator | Friday 19 September 2025 07:02:36 +0000 (0:00:02.049) 0:02:26.141 ****** 2025-09-19 07:10:48.345919 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:48.345926 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:48.345933 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:48.345939 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.345959 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.345966 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.345973 | orchestrator | 2025-09-19 07:10:48.345980 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-09-19 07:10:48.345987 | orchestrator | Friday 19 September 2025 07:02:37 +0000 (0:00:00.975) 0:02:27.117 ****** 2025-09-19 07:10:48.345994 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:10:48.346002 | orchestrator | 2025-09-19 07:10:48.346009 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-09-19 07:10:48.346037 | orchestrator | Friday 19 September 2025 07:02:38 +0000 (0:00:01.273) 0:02:28.391 ****** 2025-09-19 07:10:48.346045 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.346052 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.346059 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.346066 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.346072 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.346079 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.346086 | orchestrator | 2025-09-19 07:10:48.346093 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-09-19 07:10:48.346100 | orchestrator | Friday 19 September 2025 07:02:39 +0000 (0:00:00.700) 0:02:29.091 ****** 2025-09-19 07:10:48.346107 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.346114 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.346121 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.346133 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.346140 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.346147 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.346153 | orchestrator | 2025-09-19 07:10:48.346160 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-09-19 07:10:48.346167 | orchestrator | Friday 19 September 2025 07:02:40 +0000 (0:00:00.753) 0:02:29.845 ****** 2025-09-19 07:10:48.346174 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.346181 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.346187 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.346194 | 
orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.346201 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.346208 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.346215 | orchestrator | 2025-09-19 07:10:48.346222 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-09-19 07:10:48.346250 | orchestrator | Friday 19 September 2025 07:02:40 +0000 (0:00:00.525) 0:02:30.370 ****** 2025-09-19 07:10:48.346258 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.346265 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.346271 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.346278 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.346285 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.346292 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.346299 | orchestrator | 2025-09-19 07:10:48.346306 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-09-19 07:10:48.346313 | orchestrator | Friday 19 September 2025 07:02:41 +0000 (0:00:00.682) 0:02:31.052 ****** 2025-09-19 07:10:48.346320 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.346327 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.346334 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.346340 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.346347 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.346354 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.346361 | orchestrator | 2025-09-19 07:10:48.346368 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-09-19 07:10:48.346375 | orchestrator | Friday 19 September 2025 07:02:42 +0000 (0:00:00.584) 0:02:31.636 ****** 2025-09-19 07:10:48.346382 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.346389 | 
orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.346396 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.346402 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.346409 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.346416 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.346423 | orchestrator | 2025-09-19 07:10:48.346430 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-09-19 07:10:48.346436 | orchestrator | Friday 19 September 2025 07:02:42 +0000 (0:00:00.727) 0:02:32.364 ****** 2025-09-19 07:10:48.346443 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.346450 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.346457 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.346464 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.346470 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.346477 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.346484 | orchestrator | 2025-09-19 07:10:48.346491 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-09-19 07:10:48.346498 | orchestrator | Friday 19 September 2025 07:02:43 +0000 (0:00:00.579) 0:02:32.944 ****** 2025-09-19 07:10:48.346508 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.346515 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.346522 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.346529 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.346535 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.346547 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.346553 | orchestrator | 2025-09-19 07:10:48.346560 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-09-19 07:10:48.346567 | orchestrator | Friday 19 September 2025 07:02:44 
+0000 (0:00:00.687) 0:02:33.631 ****** 2025-09-19 07:10:48.346574 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:48.346581 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:48.346588 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:48.346595 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.346601 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.346608 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.346615 | orchestrator | 2025-09-19 07:10:48.346622 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-09-19 07:10:48.346629 | orchestrator | Friday 19 September 2025 07:02:45 +0000 (0:00:01.047) 0:02:34.679 ****** 2025-09-19 07:10:48.346636 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:10:48.346643 | orchestrator | 2025-09-19 07:10:48.346650 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-09-19 07:10:48.346657 | orchestrator | Friday 19 September 2025 07:02:46 +0000 (0:00:00.945) 0:02:35.625 ****** 2025-09-19 07:10:48.346664 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-09-19 07:10:48.346670 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-09-19 07:10:48.346677 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-09-19 07:10:48.346684 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-09-19 07:10:48.346691 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-09-19 07:10:48.346698 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-09-19 07:10:48.346704 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-09-19 07:10:48.346711 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-09-19 07:10:48.346718 | orchestrator | changed: 
[testbed-node-2] => (item=/var/lib/ceph/) 2025-09-19 07:10:48.346725 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-09-19 07:10:48.346731 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-09-19 07:10:48.346738 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-09-19 07:10:48.346745 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-09-19 07:10:48.346752 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-09-19 07:10:48.346759 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-09-19 07:10:48.346765 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-09-19 07:10:48.346772 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-09-19 07:10:48.346779 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-09-19 07:10:48.346786 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-09-19 07:10:48.346792 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-09-19 07:10:48.346799 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-09-19 07:10:48.346824 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-09-19 07:10:48.346832 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-09-19 07:10:48.346838 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-09-19 07:10:48.346845 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-09-19 07:10:48.346852 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-09-19 07:10:48.346859 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-09-19 07:10:48.346865 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-09-19 07:10:48.346872 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 
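The "Create ceph initial directories" task above loops a fixed list of paths on every node, once per item. As a minimal sketch of that idempotent loop (an illustrative reconstruction in Python, not the ceph-ansible implementation; the directory list is mirrored from the log output, not taken from the role's defaults):

```python
import os

# Directory list mirrored from the task output above (illustrative subset,
# not the authoritative ceph-ansible default list).
CEPH_DIRS = [
    "/etc/ceph",
    "/var/lib/ceph",
    "/var/lib/ceph/mon",
    "/var/lib/ceph/osd",
    "/var/lib/ceph/mds",
    "/var/lib/ceph/tmp",
    "/var/lib/ceph/crash",
    "/var/lib/ceph/radosgw",
]

def create_initial_dirs(root, dirs=CEPH_DIRS):
    """Create each ceph directory under `root`; return the paths created."""
    created = []
    for d in dirs:
        path = os.path.join(root, d.lstrip("/"))
        # exist_ok makes the loop idempotent, like `file: state=directory`
        os.makedirs(path, exist_ok=True)
        created.append(path)
    return created
```

Because the operation is idempotent, a re-run of the play would report `ok` instead of `changed` for these items.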
2025-09-19 07:10:48.346879 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-09-19 07:10:48.346890 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-09-19 07:10:48.346897 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-09-19 07:10:48.346904 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-09-19 07:10:48.346910 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-09-19 07:10:48.346917 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-09-19 07:10:48.346924 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-09-19 07:10:48.346931 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-09-19 07:10:48.346937 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-09-19 07:10:48.346974 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-09-19 07:10:48.346982 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-09-19 07:10:48.346989 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-09-19 07:10:48.346995 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-09-19 07:10:48.347002 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-09-19 07:10:48.347009 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-09-19 07:10:48.347016 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-09-19 07:10:48.347022 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-09-19 07:10:48.347033 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-09-19 07:10:48.347040 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-09-19 07:10:48.347047 | orchestrator | changed: [testbed-node-1] => 
(item=/var/lib/ceph/bootstrap-rgw) 2025-09-19 07:10:48.347053 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-19 07:10:48.347060 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-19 07:10:48.347067 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-19 07:10:48.347074 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-19 07:10:48.347080 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 07:10:48.347087 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 07:10:48.347094 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 07:10:48.347101 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-19 07:10:48.347107 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 07:10:48.347114 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 07:10:48.347121 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 07:10:48.347127 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 07:10:48.347134 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 07:10:48.347141 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 07:10:48.347148 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 07:10:48.347154 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 07:10:48.347161 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 07:10:48.347168 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 
07:10:48.347175 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 07:10:48.347181 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 07:10:48.347188 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 07:10:48.347195 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 07:10:48.347206 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 07:10:48.347213 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 07:10:48.347220 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 07:10:48.347227 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 07:10:48.347233 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 07:10:48.347240 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 07:10:48.347247 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 07:10:48.347254 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 07:10:48.347282 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 07:10:48.347291 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 07:10:48.347297 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-09-19 07:10:48.347304 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 07:10:48.347311 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-09-19 07:10:48.347318 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-09-19 07:10:48.347324 | orchestrator | changed: [testbed-node-3] => 
(item=/var/run/ceph) 2025-09-19 07:10:48.347331 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 07:10:48.347338 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-09-19 07:10:48.347345 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 07:10:48.347352 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-09-19 07:10:48.347358 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-09-19 07:10:48.347365 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-09-19 07:10:48.347372 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-09-19 07:10:48.347379 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-09-19 07:10:48.347385 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-09-19 07:10:48.347392 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-09-19 07:10:48.347399 | orchestrator | 2025-09-19 07:10:48.347406 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-09-19 07:10:48.347412 | orchestrator | Friday 19 September 2025 07:02:53 +0000 (0:00:06.903) 0:02:42.528 ****** 2025-09-19 07:10:48.347419 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.347426 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.347433 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.347440 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:10:48.347446 | orchestrator | 2025-09-19 07:10:48.347453 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-09-19 07:10:48.347463 | orchestrator | Friday 19 September 2025 07:02:54 +0000 (0:00:00.916) 0:02:43.444 ****** 2025-09-19 07:10:48.347471 | 
orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 07:10:48.347478 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 07:10:48.347484 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 07:10:48.347491 | orchestrator | 2025-09-19 07:10:48.347498 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-09-19 07:10:48.347505 | orchestrator | Friday 19 September 2025 07:02:54 +0000 (0:00:00.610) 0:02:44.055 ****** 2025-09-19 07:10:48.347515 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 07:10:48.347522 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 07:10:48.347528 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 07:10:48.347535 | orchestrator | 2025-09-19 07:10:48.347541 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-09-19 07:10:48.347547 | orchestrator | Friday 19 September 2025 07:02:55 +0000 (0:00:01.130) 0:02:45.185 ****** 2025-09-19 07:10:48.347553 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.347560 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.347566 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.347572 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.347579 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.347585 | orchestrator | ok: [testbed-node-5] 2025-09-19 
07:10:48.347591 | orchestrator | 2025-09-19 07:10:48.347598 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-09-19 07:10:48.347604 | orchestrator | Friday 19 September 2025 07:02:56 +0000 (0:00:00.715) 0:02:45.901 ****** 2025-09-19 07:10:48.347610 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.347617 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.347623 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.347629 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.347636 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.347642 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.347648 | orchestrator | 2025-09-19 07:10:48.347655 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-09-19 07:10:48.347661 | orchestrator | Friday 19 September 2025 07:02:57 +0000 (0:00:00.533) 0:02:46.434 ****** 2025-09-19 07:10:48.347667 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.347673 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.347680 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.347686 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.347692 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.347698 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.347705 | orchestrator | 2025-09-19 07:10:48.347711 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-09-19 07:10:48.347717 | orchestrator | Friday 19 September 2025 07:02:57 +0000 (0:00:00.710) 0:02:47.144 ****** 2025-09-19 07:10:48.347724 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.347730 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.347753 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.347761 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.347767 | 
orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.347773 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.347779 | orchestrator | 2025-09-19 07:10:48.347786 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-09-19 07:10:48.347792 | orchestrator | Friday 19 September 2025 07:02:58 +0000 (0:00:00.523) 0:02:47.668 ****** 2025-09-19 07:10:48.347799 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.347805 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.347811 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.347817 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.347824 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.347830 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.347836 | orchestrator | 2025-09-19 07:10:48.347843 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-09-19 07:10:48.347849 | orchestrator | Friday 19 September 2025 07:02:58 +0000 (0:00:00.719) 0:02:48.387 ****** 2025-09-19 07:10:48.347856 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.347866 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.347873 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.347879 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.347885 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.347892 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.347898 | orchestrator | 2025-09-19 07:10:48.347904 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-09-19 07:10:48.347911 | orchestrator | Friday 19 September 2025 07:02:59 +0000 (0:00:00.641) 0:02:49.028 ****** 2025-09-19 07:10:48.347917 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.347923 | orchestrator | skipping: 
[testbed-node-1] 2025-09-19 07:10:48.347930 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.347936 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.347942 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.347961 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.347968 | orchestrator | 2025-09-19 07:10:48.347974 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-09-19 07:10:48.347980 | orchestrator | Friday 19 September 2025 07:03:00 +0000 (0:00:00.794) 0:02:49.822 ****** 2025-09-19 07:10:48.347987 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.347993 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.348003 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.348010 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.348016 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.348022 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.348029 | orchestrator | 2025-09-19 07:10:48.348035 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-09-19 07:10:48.348041 | orchestrator | Friday 19 September 2025 07:03:01 +0000 (0:00:00.667) 0:02:50.489 ****** 2025-09-19 07:10:48.348048 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.348054 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.348061 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.348067 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.348074 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.348080 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.348086 | orchestrator | 2025-09-19 07:10:48.348093 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-09-19 07:10:48.348099 | orchestrator | Friday 19 September 2025 07:03:04 
+0000 (0:00:03.149) 0:02:53.639 ****** 2025-09-19 07:10:48.348106 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.348112 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.348118 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.348125 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.348131 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.348137 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.348144 | orchestrator | 2025-09-19 07:10:48.348150 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-09-19 07:10:48.348157 | orchestrator | Friday 19 September 2025 07:03:04 +0000 (0:00:00.575) 0:02:54.215 ****** 2025-09-19 07:10:48.348163 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.348169 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.348176 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.348182 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.348188 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.348195 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.348201 | orchestrator | 2025-09-19 07:10:48.348208 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-09-19 07:10:48.348214 | orchestrator | Friday 19 September 2025 07:03:05 +0000 (0:00:00.634) 0:02:54.849 ****** 2025-09-19 07:10:48.348220 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.348227 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.348233 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.348244 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.348250 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.348256 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.348263 | orchestrator | 2025-09-19 07:10:48.348269 | orchestrator | TASK [ceph-config : Render rgw configs] 
**************************************** 2025-09-19 07:10:48.348275 | orchestrator | Friday 19 September 2025 07:03:05 +0000 (0:00:00.541) 0:02:55.391 ****** 2025-09-19 07:10:48.348282 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.348288 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.348294 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.348301 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 07:10:48.348307 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 07:10:48.348314 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 07:10:48.348321 | orchestrator | 2025-09-19 07:10:48.348327 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-09-19 07:10:48.348351 | orchestrator | Friday 19 September 2025 07:03:06 +0000 (0:00:00.756) 0:02:56.147 ****** 2025-09-19 07:10:48.348359 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.348365 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.348371 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.348378 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-09-19 07:10:48.348387 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, 
{'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-09-19 07:10:48.348395 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.348401 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-09-19 07:10:48.348408 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-09-19 07:10:48.348418 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-09-19 07:10:48.348425 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-09-19 07:10:48.348432 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.348438 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.348444 | orchestrator | 2025-09-19 07:10:48.348451 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-09-19 07:10:48.348457 | orchestrator | Friday 19 September 2025 07:03:07 +0000 (0:00:00.707) 0:02:56.855 ****** 2025-09-19 07:10:48.348468 | 
orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.348474 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.348480 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.348487 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.348493 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.348499 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.348506 | orchestrator | 2025-09-19 07:10:48.348512 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-09-19 07:10:48.348518 | orchestrator | Friday 19 September 2025 07:03:08 +0000 (0:00:00.835) 0:02:57.690 ****** 2025-09-19 07:10:48.348525 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.348531 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.348537 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.348543 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.348550 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.348556 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.348562 | orchestrator | 2025-09-19 07:10:48.348569 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-19 07:10:48.348575 | orchestrator | Friday 19 September 2025 07:03:08 +0000 (0:00:00.539) 0:02:58.230 ****** 2025-09-19 07:10:48.348582 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.348588 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.348594 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.348600 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.348607 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.348613 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.348619 | orchestrator | 2025-09-19 07:10:48.348625 | orchestrator | TASK [ceph-facts : Set_fact 
_radosgw_address to radosgw_address_block ipv4] ****
2025-09-19 07:10:48.348632 | orchestrator | Friday 19 September 2025 07:03:09 +0000 (0:00:00.829) 0:02:59.059 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
Friday 19 September 2025 07:03:10 +0000 (0:00:00.577) 0:02:59.637 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
Friday 19 September 2025 07:03:11 +0000 (0:00:00.931) 0:03:00.569 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact _interface] ****************************************
Friday 19 September 2025 07:03:11 +0000 (0:00:00.753) 0:03:01.322 ******
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
Friday 19 September 2025 07:03:12 +0000 (0:00:00.650) 0:03:01.972 ******
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
Friday 19 September 2025 07:03:13 +0000 (0:00:00.741) 0:03:02.714 ******
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
Friday 19 September 2025 07:03:14 +0000 (0:00:00.975) 0:03:03.689 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact rgw_instances] *************************************
Friday 19 September 2025 07:03:14 +0000 (0:00:00.721) 0:03:04.410 ******
skipping: [testbed-node-0] => (item=0)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=0)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=0)
skipping: [testbed-node-2]
ok: [testbed-node-3] => (item=0)
ok: [testbed-node-5] => (item=0)
ok: [testbed-node-4] => (item=0)

TASK [ceph-config : Generate Ceph file] ****************************************
Friday 19 September 2025 07:03:17 +0000 (0:00:02.596) 0:03:07.007 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Friday 19 September 2025 07:03:20 +0000 (0:00:03.157) 0:03:10.164 ******
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Mons handler] **********************************
Friday 19 September 2025 07:03:21 +0000 (0:00:01.076) 0:03:11.241 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
Friday 19 September 2025 07:03:22 +0000 (0:00:01.187) 0:03:12.428 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
Friday 19 September 2025 07:03:23 +0000 (0:00:00.454) 0:03:12.883 ******
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
Friday 19 September 2025 07:03:24 +0000 (0:00:01.386) 0:03:14.269 ******
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
Friday 19 September 2025 07:03:25 +0000 (0:00:01.076) 0:03:15.346 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Osds handler] **********************************
Friday 19 September 2025 07:03:26 +0000 (0:00:00.570) 0:03:15.916 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
Friday 19 September 2025 07:03:27 +0000 (0:00:00.899) 0:03:16.816 ******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
Friday 19 September 2025 07:03:28 +0000 (0:00:00.800) 0:03:17.616 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
Friday 19 September 2025 07:03:28 +0000 (0:00:00.593) 0:03:18.209 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
Friday 19 September 2025 07:03:29 +0000 (0:00:00.309) 0:03:18.518 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Get pool list] *********************************
Friday 19 September 2025 07:03:29 +0000 (0:00:00.425) 0:03:18.944 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
Friday 19 September 2025 07:03:29 +0000 (0:00:00.252) 0:03:19.197 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
Friday 19 September 2025 07:03:29 +0000 (0:00:00.227) 0:03:19.424 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
Friday 19 September 2025 07:03:30 +0000 (0:00:00.116) 0:03:19.541 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
Friday 19 September 2025 07:03:30 +0000 (0:00:00.254) 0:03:19.796 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
Friday 19 September 2025 07:03:30 +0000 (0:00:00.231) 0:03:20.027 ******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
Friday 19 September 2025 07:03:31 +0000 (0:00:01.013) 0:03:21.041 ******
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
Friday 19 September 2025 07:03:32 +0000 (0:00:00.419) 0:03:21.461 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
Friday 19 September 2025 07:03:32 +0000 (0:00:00.232) 0:03:21.693 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Friday 19 September 2025 07:03:32 +0000 (0:00:00.229) 0:03:21.922 ******
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-1]
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Friday 19 September 2025 07:03:33 +0000 (0:00:01.432) 0:03:23.355 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Friday 19 September 2025 07:03:34 +0000 (0:00:00.462) 0:03:23.818 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Friday 19 September 2025 07:03:35 +0000 (0:00:01.190) 0:03:25.009 ******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Friday 19 September 2025 07:03:36 +0000 (0:00:00.839) 0:03:25.848 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
Friday 19 September 2025 07:03:36 +0000 (0:00:00.425) 0:03:26.273 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
Friday 19 September 2025 07:03:38 +0000 (0:00:01.233) 0:03:27.506 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
Friday 19 September 2025 07:03:38 +0000 (0:00:00.433) 0:03:27.940 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
Friday 19 September 2025 07:03:39 +0000 (0:00:01.408) 0:03:29.349 ******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
Friday 19 September 2025 07:03:40 +0000 (0:00:00.491) 0:03:29.841 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
Friday 19 September 2025 07:03:40 +0000 (0:00:00.274) 0:03:30.116 ******
skipping: [testbed-node-1]
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Friday 19 September 2025 07:03:41 +0000 (0:00:00.697) 0:03:30.813 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Friday 19 September 2025 07:03:42 +0000 (0:00:00.690) 0:03:31.504 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Friday 19 September 2025 07:03:42 +0000 (0:00:00.400) 0:03:31.904 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Friday 19 September 2025 07:03:43 +0000 (0:00:01.164) 0:03:33.069 ******
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Friday 19 September 2025 07:03:44 +0000 (0:00:00.534) 0:03:33.603 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mon] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Friday 19 September 2025 07:03:44 +0000 (0:00:00.561) 0:03:34.165 ******
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Friday 19 September 2025 07:03:45 +0000 (0:00:00.612) 0:03:34.778 ******
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Friday 19 September 2025 07:03:45 +0000 (0:00:00.513) 0:03:35.291 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Friday 19 September 2025 07:03:46 +0000 (0:00:00.788) 0:03:36.079 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
Friday 19 September 2025 07:03:46 +0000 (0:00:00.296) 0:03:36.376 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a rgw container] ********************************
Friday 19 September 2025 07:03:47 +0000 (0:00:00.301) 0:03:36.677 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Friday 19 September 2025 07:03:47 +0000 (0:00:00.269) 0:03:36.947 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Friday 19 September 2025 07:03:48 +0000 (0:00:00.974) 0:03:37.921 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Friday 19 September 2025 07:03:48 +0000 (0:00:00.288) 0:03:38.209 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Friday 19 September 2025 07:03:49 +0000 (0:00:00.285) 0:03:38.495 ******
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Friday 19 September 2025 07:03:49 +0000 (0:00:00.796) 0:03:39.292 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Friday 19 September 2025 07:03:50 +0000 (0:00:01.074) 0:03:40.367 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Friday 19 September 2025 07:03:51 +0000 (0:00:00.388) 0:03:40.755 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Friday 19 September 2025 07:03:51 +0000 (0:00:00.465) 0:03:41.221 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Friday 19 September 2025 07:03:52 +0000 (0:00:00.456) 0:03:41.678 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Friday 19 September 2025 07:03:52 +0000 (0:00:00.603) 0:03:42.281 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Friday 19 September 2025 07:03:53 +0000 (0:00:00.394) 0:03:42.675 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Friday 19 September 2025 07:03:53 +0000 (0:00:00.376) 0:03:43.052 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Friday 19 September 2025 07:03:53 +0000 (0:00:00.324) 0:03:43.376 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Friday 19 September 2025 07:03:54 +0000 (0:00:00.649) 0:03:44.025 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Friday 19 September 2025 07:03:55 +0000 (0:00:00.457) 0:03:44.482 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
Friday 19 September 2025 07:03:55 +0000 (0:00:00.680) 0:03:45.163 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Include deploy_monitors.yml] **********************************
Friday 19 September 2025 07:03:56 +0000 (0:00:00.380) 0:03:45.544 ******
included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Check if monitor initial keyring already exists] **************
Friday 19 September 2025 07:03:57 +0000 (0:00:01.139) 0:03:46.683 ******
skipping: [testbed-node-0]

TASK [ceph-mon : Generate monitor initial keyring] *****************************
Friday 19 September 2025 07:03:57 +0000 (0:00:00.151) 0:03:46.835 ******
changed: [testbed-node-0 -> localhost]

TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
Friday 19 September 2025 07:03:58 +0000 (0:00:01.120) 0:03:47.955 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Get initial keyring when it already exists] *******************
Friday 19 September 2025 07:03:58 +0000 (0:00:00.421) 0:03:48.377 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Create monitor initial keyring] *******************************
Friday 19 September 2025 07:03:59 +0000 (0:00:00.734) 0:03:49.111 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
Friday 19 September 2025 07:04:00 +0000 (0:00:01.312) 0:03:50.424 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Create monitor directory] *************************************
Friday 19 September 2025 07:04:02 +0000 (0:00:01.010) 0:03:51.435 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
Friday 19 September 2025 07:04:02 +0000 (0:00:00.803) 0:03:52.238 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Create admin keyring] *****************************************
Friday 19 September 2025 07:04:03 +0000 (0:00:00.966) 0:03:53.204 ******
changed: [testbed-node-0]

TASK [ceph-mon : Slurp admin keyring] ******************************************
Friday 19 September 2025 07:04:05 +0000 (0:00:01.356) 0:03:54.560 ******
ok: [testbed-node-0]

TASK [ceph-mon : Copy admin keyring over to mons] ******************************
Friday 19 September 2025 07:04:05 +0000 (0:00:00.735) 0:03:55.296 ******
changed: [testbed-node-0] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
ok: [testbed-node-1] => (item=None)
ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-09-19 07:10:48.351999 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 07:10:48.352005 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-09-19 07:10:48.352010 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-09-19 07:10:48.352016 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-09-19 07:10:48.352022 | orchestrator | 2025-09-19 07:10:48.352027 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-09-19 07:10:48.352033 | orchestrator | Friday 19 September 2025 07:04:09 +0000 (0:00:03.447) 0:03:58.744 ****** 2025-09-19 07:10:48.352038 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:48.352044 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:48.352050 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:48.352055 | orchestrator | 2025-09-19 07:10:48.352061 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-09-19 07:10:48.352066 | orchestrator | Friday 19 September 2025 07:04:10 +0000 (0:00:01.409) 0:04:00.154 ****** 2025-09-19 07:10:48.352078 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:48.352084 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:48.352090 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:48.352095 | orchestrator | 2025-09-19 07:10:48.352101 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-09-19 07:10:48.352107 | orchestrator | Friday 19 September 2025 07:04:11 +0000 (0:00:00.571) 0:04:00.725 ****** 2025-09-19 07:10:48.352112 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:48.352118 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:48.352123 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:48.352129 | orchestrator | 2025-09-19 07:10:48.352135 | orchestrator | TASK [ceph-mon : Generate initial monmap] 
************************************** 2025-09-19 07:10:48.352140 | orchestrator | Friday 19 September 2025 07:04:11 +0000 (0:00:00.326) 0:04:01.052 ****** 2025-09-19 07:10:48.352146 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:48.352151 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:48.352157 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:48.352163 | orchestrator | 2025-09-19 07:10:48.352168 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-09-19 07:10:48.352191 | orchestrator | Friday 19 September 2025 07:04:13 +0000 (0:00:01.504) 0:04:02.557 ****** 2025-09-19 07:10:48.352197 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:48.352203 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:48.352208 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:48.352214 | orchestrator | 2025-09-19 07:10:48.352219 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-09-19 07:10:48.352225 | orchestrator | Friday 19 September 2025 07:04:14 +0000 (0:00:01.386) 0:04:03.943 ****** 2025-09-19 07:10:48.352231 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.352236 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.352242 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.352248 | orchestrator | 2025-09-19 07:10:48.352253 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-09-19 07:10:48.352259 | orchestrator | Friday 19 September 2025 07:04:15 +0000 (0:00:00.498) 0:04:04.441 ****** 2025-09-19 07:10:48.352264 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:10:48.352270 | orchestrator | 2025-09-19 07:10:48.352275 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-09-19 07:10:48.352281 | 
orchestrator | Friday 19 September 2025 07:04:15 +0000 (0:00:00.574) 0:04:05.016 ****** 2025-09-19 07:10:48.352286 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.352292 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.352298 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.352303 | orchestrator | 2025-09-19 07:10:48.352309 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-09-19 07:10:48.352314 | orchestrator | Friday 19 September 2025 07:04:16 +0000 (0:00:00.432) 0:04:05.449 ****** 2025-09-19 07:10:48.352320 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.352325 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.352331 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.352336 | orchestrator | 2025-09-19 07:10:48.352342 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-09-19 07:10:48.352347 | orchestrator | Friday 19 September 2025 07:04:16 +0000 (0:00:00.304) 0:04:05.753 ****** 2025-09-19 07:10:48.352353 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:10:48.352358 | orchestrator | 2025-09-19 07:10:48.352364 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-09-19 07:10:48.352372 | orchestrator | Friday 19 September 2025 07:04:16 +0000 (0:00:00.646) 0:04:06.399 ****** 2025-09-19 07:10:48.352378 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:48.352384 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:48.352389 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:48.352399 | orchestrator | 2025-09-19 07:10:48.352404 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-09-19 07:10:48.352410 | orchestrator | Friday 19 September 2025 07:04:18 +0000 (0:00:01.795) 
0:04:08.195 ****** 2025-09-19 07:10:48.352415 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:48.352421 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:48.352426 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:48.352432 | orchestrator | 2025-09-19 07:10:48.352437 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-09-19 07:10:48.352443 | orchestrator | Friday 19 September 2025 07:04:19 +0000 (0:00:01.183) 0:04:09.379 ****** 2025-09-19 07:10:48.352448 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:48.352454 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:48.352459 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:48.352465 | orchestrator | 2025-09-19 07:10:48.352470 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-09-19 07:10:48.352476 | orchestrator | Friday 19 September 2025 07:04:21 +0000 (0:00:01.910) 0:04:11.289 ****** 2025-09-19 07:10:48.352481 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:48.352487 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:48.352492 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:48.352498 | orchestrator | 2025-09-19 07:10:48.352503 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-09-19 07:10:48.352509 | orchestrator | Friday 19 September 2025 07:04:23 +0000 (0:00:02.039) 0:04:13.328 ****** 2025-09-19 07:10:48.352514 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:10:48.352520 | orchestrator | 2025-09-19 07:10:48.352525 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2025-09-19 07:10:48.352531 | orchestrator | Friday 19 September 2025 07:04:24 +0000 (0:00:00.644) 0:04:13.972 ****** 2025-09-19 07:10:48.352537 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:48.352542 | orchestrator | 2025-09-19 07:10:48.352548 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-09-19 07:10:48.352553 | orchestrator | Friday 19 September 2025 07:04:26 +0000 (0:00:01.604) 0:04:15.577 ****** 2025-09-19 07:10:48.352559 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:48.352564 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:48.352570 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:48.352575 | orchestrator | 2025-09-19 07:10:48.352581 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-09-19 07:10:48.352586 | orchestrator | Friday 19 September 2025 07:04:36 +0000 (0:00:09.945) 0:04:25.522 ****** 2025-09-19 07:10:48.352592 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.352597 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.352603 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.352608 | orchestrator | 2025-09-19 07:10:48.352614 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-09-19 07:10:48.352619 | orchestrator | Friday 19 September 2025 07:04:36 +0000 (0:00:00.360) 0:04:25.883 ****** 2025-09-19 07:10:48.352642 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f7c6940094a32245e29e124aa3b8646337e9014'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-09-19 07:10:48.352650 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f7c6940094a32245e29e124aa3b8646337e9014'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-09-19 07:10:48.352657 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f7c6940094a32245e29e124aa3b8646337e9014'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-09-19 07:10:48.352667 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f7c6940094a32245e29e124aa3b8646337e9014'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-09-19 07:10:48.352673 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f7c6940094a32245e29e124aa3b8646337e9014'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-09-19 07:10:48.352683 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f7c6940094a32245e29e124aa3b8646337e9014'}}, {'key': 'osd_crush_chooseleaf_type', 'value': 
'__omit_place_holder__3f7c6940094a32245e29e124aa3b8646337e9014'}])  2025-09-19 07:10:48.352690 | orchestrator | 2025-09-19 07:10:48.352696 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-19 07:10:48.352701 | orchestrator | Friday 19 September 2025 07:04:51 +0000 (0:00:14.644) 0:04:40.528 ****** 2025-09-19 07:10:48.352707 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.352712 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.352718 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.352723 | orchestrator | 2025-09-19 07:10:48.352729 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-19 07:10:48.352735 | orchestrator | Friday 19 September 2025 07:04:51 +0000 (0:00:00.321) 0:04:40.849 ****** 2025-09-19 07:10:48.352740 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:10:48.352746 | orchestrator | 2025-09-19 07:10:48.352751 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-19 07:10:48.352757 | orchestrator | Friday 19 September 2025 07:04:51 +0000 (0:00:00.458) 0:04:41.308 ****** 2025-09-19 07:10:48.352763 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:48.352768 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:48.352774 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:48.352779 | orchestrator | 2025-09-19 07:10:48.352785 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-19 07:10:48.352790 | orchestrator | Friday 19 September 2025 07:04:52 +0000 (0:00:00.439) 0:04:41.747 ****** 2025-09-19 07:10:48.352796 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.352801 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.352807 | orchestrator | skipping: [testbed-node-2] 2025-09-19 
07:10:48.352813 | orchestrator | 2025-09-19 07:10:48.352818 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-19 07:10:48.352824 | orchestrator | Friday 19 September 2025 07:04:52 +0000 (0:00:00.331) 0:04:42.078 ****** 2025-09-19 07:10:48.352829 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-19 07:10:48.352835 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-19 07:10:48.352840 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-19 07:10:48.352846 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.352851 | orchestrator | 2025-09-19 07:10:48.352857 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-19 07:10:48.352866 | orchestrator | Friday 19 September 2025 07:04:53 +0000 (0:00:00.552) 0:04:42.630 ****** 2025-09-19 07:10:48.352871 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:48.352877 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:48.352882 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:48.352888 | orchestrator | 2025-09-19 07:10:48.352893 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-09-19 07:10:48.352899 | orchestrator | 2025-09-19 07:10:48.352905 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-19 07:10:48.352910 | orchestrator | Friday 19 September 2025 07:04:53 +0000 (0:00:00.535) 0:04:43.165 ****** 2025-09-19 07:10:48.352931 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1, testbed-node-0, testbed-node-2 2025-09-19 07:10:48.352938 | orchestrator | 2025-09-19 07:10:48.352956 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-19 07:10:48.352963 | orchestrator | Friday 19 September 2025 07:04:54 +0000 
(0:00:00.648) 0:04:43.814 ****** 2025-09-19 07:10:48.352968 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:10:48.352974 | orchestrator | 2025-09-19 07:10:48.352979 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-19 07:10:48.352985 | orchestrator | Friday 19 September 2025 07:04:54 +0000 (0:00:00.449) 0:04:44.264 ****** 2025-09-19 07:10:48.352990 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:48.352996 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:48.353001 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:48.353007 | orchestrator | 2025-09-19 07:10:48.353012 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-19 07:10:48.353018 | orchestrator | Friday 19 September 2025 07:04:55 +0000 (0:00:00.838) 0:04:45.103 ****** 2025-09-19 07:10:48.353023 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.353029 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.353034 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.353040 | orchestrator | 2025-09-19 07:10:48.353045 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-19 07:10:48.353051 | orchestrator | Friday 19 September 2025 07:04:55 +0000 (0:00:00.275) 0:04:45.378 ****** 2025-09-19 07:10:48.353056 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.353062 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.353067 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.353073 | orchestrator | 2025-09-19 07:10:48.353078 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-19 07:10:48.353084 | orchestrator | Friday 19 September 2025 07:04:56 +0000 (0:00:00.267) 0:04:45.645 ****** 2025-09-19 07:10:48.353089 | 
orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.353095 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.353100 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.353106 | orchestrator | 2025-09-19 07:10:48.353111 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-19 07:10:48.353117 | orchestrator | Friday 19 September 2025 07:04:56 +0000 (0:00:00.289) 0:04:45.935 ****** 2025-09-19 07:10:48.353126 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:48.353132 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:48.353137 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:48.353143 | orchestrator | 2025-09-19 07:10:48.353148 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-19 07:10:48.353154 | orchestrator | Friday 19 September 2025 07:04:57 +0000 (0:00:00.833) 0:04:46.768 ****** 2025-09-19 07:10:48.353159 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.353165 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.353170 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.353176 | orchestrator | 2025-09-19 07:10:48.353181 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-19 07:10:48.353187 | orchestrator | Friday 19 September 2025 07:04:57 +0000 (0:00:00.226) 0:04:46.994 ****** 2025-09-19 07:10:48.353196 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.353202 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.353207 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.353213 | orchestrator | 2025-09-19 07:10:48.353218 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-19 07:10:48.353224 | orchestrator | Friday 19 September 2025 07:04:57 +0000 (0:00:00.236) 0:04:47.230 ****** 2025-09-19 07:10:48.353230 | orchestrator | ok: 
[testbed-node-0] 2025-09-19 07:10:48.353235 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:48.353241 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:48.353246 | orchestrator | 2025-09-19 07:10:48.353252 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-19 07:10:48.353257 | orchestrator | Friday 19 September 2025 07:04:58 +0000 (0:00:00.624) 0:04:47.855 ****** 2025-09-19 07:10:48.353263 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:48.353268 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:48.353274 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:48.353280 | orchestrator | 2025-09-19 07:10:48.353285 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-19 07:10:48.353291 | orchestrator | Friday 19 September 2025 07:04:59 +0000 (0:00:00.811) 0:04:48.667 ****** 2025-09-19 07:10:48.353296 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.353302 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.353307 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.353313 | orchestrator | 2025-09-19 07:10:48.353318 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-19 07:10:48.353324 | orchestrator | Friday 19 September 2025 07:04:59 +0000 (0:00:00.285) 0:04:48.952 ****** 2025-09-19 07:10:48.353329 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:48.353335 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:48.353340 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:48.353346 | orchestrator | 2025-09-19 07:10:48.353351 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-19 07:10:48.353357 | orchestrator | Friday 19 September 2025 07:04:59 +0000 (0:00:00.328) 0:04:49.281 ****** 2025-09-19 07:10:48.353362 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.353368 | 
orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.353373 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.353379 | orchestrator | 2025-09-19 07:10:48.353384 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-19 07:10:48.353390 | orchestrator | Friday 19 September 2025 07:05:00 +0000 (0:00:00.367) 0:04:49.648 ****** 2025-09-19 07:10:48.353396 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.353401 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.353406 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.353412 | orchestrator | 2025-09-19 07:10:48.353418 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-19 07:10:48.353423 | orchestrator | Friday 19 September 2025 07:05:00 +0000 (0:00:00.629) 0:04:50.277 ****** 2025-09-19 07:10:48.353445 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.353452 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.353457 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.353463 | orchestrator | 2025-09-19 07:10:48.353468 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-19 07:10:48.353474 | orchestrator | Friday 19 September 2025 07:05:01 +0000 (0:00:00.318) 0:04:50.595 ****** 2025-09-19 07:10:48.353480 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.353485 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.353491 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.353496 | orchestrator | 2025-09-19 07:10:48.353502 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-19 07:10:48.353507 | orchestrator | Friday 19 September 2025 07:05:01 +0000 (0:00:00.322) 0:04:50.918 ****** 2025-09-19 07:10:48.353516 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.353522 | 
orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.353527 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.353533 | orchestrator | 2025-09-19 07:10:48.353538 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-19 07:10:48.353544 | orchestrator | Friday 19 September 2025 07:05:01 +0000 (0:00:00.294) 0:04:51.213 ****** 2025-09-19 07:10:48.353550 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:48.353555 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:48.353561 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:48.353566 | orchestrator | 2025-09-19 07:10:48.353572 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-19 07:10:48.353577 | orchestrator | Friday 19 September 2025 07:05:02 +0000 (0:00:00.623) 0:04:51.836 ****** 2025-09-19 07:10:48.353583 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:48.353588 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:48.353594 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:48.353599 | orchestrator | 2025-09-19 07:10:48.353605 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-19 07:10:48.353610 | orchestrator | Friday 19 September 2025 07:05:02 +0000 (0:00:00.338) 0:04:52.174 ****** 2025-09-19 07:10:48.353616 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:48.353622 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:48.353627 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:48.353633 | orchestrator | 2025-09-19 07:10:48.353638 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-09-19 07:10:48.353644 | orchestrator | Friday 19 September 2025 07:05:03 +0000 (0:00:00.655) 0:04:52.830 ****** 2025-09-19 07:10:48.353649 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-19 07:10:48.353658 | orchestrator | ok: [testbed-node-0 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 07:10:48.353663 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 07:10:48.353669 | orchestrator | 2025-09-19 07:10:48.353675 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-09-19 07:10:48.353680 | orchestrator | Friday 19 September 2025 07:05:04 +0000 (0:00:00.912) 0:04:53.742 ****** 2025-09-19 07:10:48.353686 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:10:48.353691 | orchestrator | 2025-09-19 07:10:48.353697 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-09-19 07:10:48.353702 | orchestrator | Friday 19 September 2025 07:05:05 +0000 (0:00:00.805) 0:04:54.547 ****** 2025-09-19 07:10:48.353708 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:48.353714 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:48.353719 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:48.353725 | orchestrator | 2025-09-19 07:10:48.353730 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-09-19 07:10:48.353736 | orchestrator | Friday 19 September 2025 07:05:05 +0000 (0:00:00.752) 0:04:55.300 ****** 2025-09-19 07:10:48.353741 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:48.353747 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:48.353753 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:48.353758 | orchestrator | 2025-09-19 07:10:48.353764 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-09-19 07:10:48.353769 | orchestrator | Friday 19 September 2025 07:05:06 +0000 (0:00:00.351) 0:04:55.652 ****** 2025-09-19 07:10:48.353775 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-19 
07:10:48.353781 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-19 07:10:48.353786 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-19 07:10:48.353792 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2025-09-19 07:10:48.353797 | orchestrator |
2025-09-19 07:10:48.353803 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2025-09-19 07:10:48.353812 | orchestrator | Friday 19 September 2025  07:05:16 +0000 (0:00:10.666) 0:05:06.318 ******
2025-09-19 07:10:48.353818 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:48.353823 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:48.353829 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:48.353834 | orchestrator |
2025-09-19 07:10:48.353840 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2025-09-19 07:10:48.353846 | orchestrator | Friday 19 September 2025  07:05:17 +0000 (0:00:00.596) 0:05:06.914 ******
2025-09-19 07:10:48.353851 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-09-19 07:10:48.353857 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-19 07:10:48.353862 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-19 07:10:48.353868 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-09-19 07:10:48.353873 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 07:10:48.353879 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 07:10:48.353884 | orchestrator |
2025-09-19 07:10:48.353890 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2025-09-19 07:10:48.353895 | orchestrator | Friday 19 September 2025  07:05:19 +0000 (0:00:02.283) 0:05:09.198 ******
2025-09-19 07:10:48.353901 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-09-19 07:10:48.353907 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-19 07:10:48.353928 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-19 07:10:48.353934 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-09-19 07:10:48.353939 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-19 07:10:48.353958 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-09-19 07:10:48.353964 | orchestrator |
2025-09-19 07:10:48.353970 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2025-09-19 07:10:48.353976 | orchestrator | Friday 19 September 2025  07:05:21 +0000 (0:00:01.255) 0:05:10.453 ******
2025-09-19 07:10:48.353981 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:48.353987 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:48.353993 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:48.353998 | orchestrator |
2025-09-19 07:10:48.354004 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2025-09-19 07:10:48.354009 | orchestrator | Friday 19 September 2025  07:05:21 +0000 (0:00:00.752) 0:05:11.206 ******
2025-09-19 07:10:48.354033 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.354039 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:48.354045 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:48.354050 | orchestrator |
2025-09-19 07:10:48.354056 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2025-09-19 07:10:48.354062 | orchestrator | Friday 19 September 2025  07:05:22 +0000 (0:00:00.574) 0:05:11.781 ******
2025-09-19 07:10:48.354067 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.354073 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:48.354078 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:48.354084 | orchestrator |
2025-09-19 07:10:48.354090 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2025-09-19 07:10:48.354095 | orchestrator | Friday 19 September 2025  07:05:22 +0000 (0:00:00.344) 0:05:12.125 ******
2025-09-19 07:10:48.354101 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:10:48.354106 | orchestrator |
2025-09-19 07:10:48.354112 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2025-09-19 07:10:48.354118 | orchestrator | Friday 19 September 2025  07:05:23 +0000 (0:00:00.533) 0:05:12.659 ******
2025-09-19 07:10:48.354123 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.354129 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:48.354134 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:48.354140 | orchestrator |
2025-09-19 07:10:48.354148 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2025-09-19 07:10:48.354159 | orchestrator | Friday 19 September 2025  07:05:23 +0000 (0:00:00.577) 0:05:13.237 ******
2025-09-19 07:10:48.354165 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.354171 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:48.354176 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:48.354182 | orchestrator |
2025-09-19 07:10:48.354187 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2025-09-19 07:10:48.354193 | orchestrator | Friday 19 September 2025  07:05:24 +0000 (0:00:00.389) 0:05:13.626 ******
2025-09-19 07:10:48.354198 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:10:48.354204 | orchestrator |
2025-09-19 07:10:48.354209 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2025-09-19 07:10:48.354215 | orchestrator | Friday 19 September 2025  07:05:24 +0000 (0:00:00.560) 0:05:14.187 ******
2025-09-19 07:10:48.354221 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:10:48.354226 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:10:48.354232 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:10:48.354237 | orchestrator |
2025-09-19 07:10:48.354243 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2025-09-19 07:10:48.354248 | orchestrator | Friday 19 September 2025  07:05:26 +0000 (0:00:01.728) 0:05:15.915 ******
2025-09-19 07:10:48.354254 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:10:48.354259 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:10:48.354265 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:10:48.354271 | orchestrator |
2025-09-19 07:10:48.354276 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2025-09-19 07:10:48.354282 | orchestrator | Friday 19 September 2025  07:05:27 +0000 (0:00:01.175) 0:05:17.091 ******
2025-09-19 07:10:48.354287 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:10:48.354293 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:10:48.354298 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:10:48.354304 | orchestrator |
2025-09-19 07:10:48.354310 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2025-09-19 07:10:48.354315 | orchestrator | Friday 19 September 2025  07:05:29 +0000 (0:00:01.768) 0:05:18.860 ******
2025-09-19 07:10:48.354321 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:10:48.354326 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:10:48.354332 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:10:48.354337 | orchestrator |
2025-09-19 07:10:48.354343 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2025-09-19 07:10:48.354349 | orchestrator | Friday 19 September 2025  07:05:31 +0000 (0:00:01.925) 0:05:20.785 ******
2025-09-19 07:10:48.354354 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.354360 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:48.354365 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2025-09-19 07:10:48.354371 | orchestrator |
2025-09-19 07:10:48.354376 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2025-09-19 07:10:48.354382 | orchestrator | Friday 19 September 2025  07:05:32 +0000 (0:00:00.703) 0:05:21.489 ******
2025-09-19 07:10:48.354387 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2025-09-19 07:10:48.354393 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2025-09-19 07:10:48.354399 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2025-09-19 07:10:48.354422 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2025-09-19 07:10:48.354428 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2025-09-19 07:10:48.354434 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-09-19 07:10:48.354443 | orchestrator |
2025-09-19 07:10:48.354449 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2025-09-19 07:10:48.354455 | orchestrator | Friday 19 September 2025  07:06:02 +0000 (0:00:30.201) 0:05:51.691 ******
2025-09-19 07:10:48.354460 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-09-19 07:10:48.354466 | orchestrator |
2025-09-19 07:10:48.354472 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2025-09-19 07:10:48.354477 | orchestrator | Friday 19 September 2025  07:06:03 +0000 (0:00:01.332) 0:05:53.024 ******
2025-09-19 07:10:48.354483 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:48.354488 | orchestrator |
2025-09-19 07:10:48.354494 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2025-09-19 07:10:48.354499 | orchestrator | Friday 19 September 2025  07:06:03 +0000 (0:00:00.316) 0:05:53.341 ******
2025-09-19 07:10:48.354505 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:48.354510 | orchestrator |
2025-09-19 07:10:48.354516 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2025-09-19 07:10:48.354522 | orchestrator | Friday 19 September 2025  07:06:04 +0000 (0:00:00.160) 0:05:53.501 ******
2025-09-19 07:10:48.354527 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2025-09-19 07:10:48.354533 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2025-09-19 07:10:48.354538 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2025-09-19 07:10:48.354544 | orchestrator |
2025-09-19 07:10:48.354549 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2025-09-19 07:10:48.354555 | orchestrator | Friday 19 September 2025  07:06:10 +0000 (0:00:06.548) 0:06:00.049 ******
2025-09-19 07:10:48.354560 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2025-09-19 07:10:48.354566 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2025-09-19 07:10:48.354575 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2025-09-19 07:10:48.354580 | orchestrator | skipping: [testbed-node-2] => (item=status)
2025-09-19 07:10:48.354586 | orchestrator |
2025-09-19 07:10:48.354591 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-19 07:10:48.354597 | orchestrator | Friday 19 September 2025  07:06:16 +0000 (0:00:05.410) 0:06:05.460 ******
2025-09-19 07:10:48.354603 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:10:48.354608 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:10:48.354614 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:10:48.354619 | orchestrator |
2025-09-19 07:10:48.354625 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-09-19 07:10:48.354630 | orchestrator | Friday 19 September 2025  07:06:16 +0000 (0:00:00.715) 0:06:06.175 ******
2025-09-19 07:10:48.354636 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:10:48.354642 | orchestrator |
2025-09-19 07:10:48.354647 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-09-19 07:10:48.354652 | orchestrator | Friday 19 September 2025  07:06:17 +0000 (0:00:00.534) 0:06:06.710 ******
2025-09-19 07:10:48.354658 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:48.354664 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:48.354669 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:48.354675 | orchestrator |
2025-09-19 07:10:48.354680 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-09-19 07:10:48.354686 | orchestrator | Friday 19 September 2025  07:06:17 +0000 (0:00:00.601) 0:06:07.311 ******
2025-09-19 07:10:48.354691 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:10:48.354697 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:10:48.354702 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:10:48.354708 | orchestrator |
2025-09-19 07:10:48.354713 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-09-19 07:10:48.354724 | orchestrator | Friday 19 September 2025  07:06:19 +0000 (0:00:01.268) 0:06:08.579 ******
2025-09-19 07:10:48.354730 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 07:10:48.354735 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 07:10:48.354741 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 07:10:48.354746 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:48.354752 | orchestrator |
2025-09-19 07:10:48.354757 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-09-19 07:10:48.354763 | orchestrator | Friday 19 September 2025  07:06:19 +0000 (0:00:00.581) 0:06:09.161 ******
2025-09-19 07:10:48.354768 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:48.354774 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:48.354780 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:48.354785 | orchestrator |
2025-09-19 07:10:48.354791 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2025-09-19 07:10:48.354796 | orchestrator |
2025-09-19 07:10:48.354802 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-19 07:10:48.354807 | orchestrator | Friday 19 September 2025  07:06:20 +0000 (0:00:00.944) 0:06:10.105 ******
2025-09-19 07:10:48.354813 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:10:48.354819 | orchestrator |
2025-09-19 07:10:48.354824 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-19 07:10:48.354830 | orchestrator | Friday 19 September 2025  07:06:21 +0000 (0:00:00.495) 0:06:10.601 ******
2025-09-19 07:10:48.354852 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:10:48.354858 | orchestrator |
2025-09-19 07:10:48.354864 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-19 07:10:48.354869 | orchestrator | Friday 19 September 2025  07:06:21 +0000 (0:00:00.740) 0:06:11.341 ******
2025-09-19 07:10:48.354875 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.354881 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.354886 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.354892 | orchestrator |
2025-09-19 07:10:48.354897 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-19 07:10:48.354903 | orchestrator | Friday 19 September 2025  07:06:22 +0000 (0:00:00.319) 0:06:11.660 ******
2025-09-19 07:10:48.354908 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.354914 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.354919 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.354925 | orchestrator |
2025-09-19 07:10:48.354930 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-19 07:10:48.354936 | orchestrator | Friday 19 September 2025  07:06:22 +0000 (0:00:00.674) 0:06:12.334 ******
2025-09-19 07:10:48.354941 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.354979 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.354985 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.354991 | orchestrator |
2025-09-19 07:10:48.354997 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-19 07:10:48.355002 | orchestrator | Friday 19 September 2025  07:06:23 +0000 (0:00:00.714) 0:06:13.048 ******
2025-09-19 07:10:48.355008 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.355013 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.355019 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.355024 | orchestrator |
2025-09-19 07:10:48.355030 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-19 07:10:48.355035 | orchestrator | Friday 19 September 2025  07:06:24 +0000 (0:00:00.915) 0:06:13.964 ******
2025-09-19 07:10:48.355041 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.355046 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.355052 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.355057 | orchestrator |
2025-09-19 07:10:48.355067 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-19 07:10:48.355073 | orchestrator | Friday 19 September 2025  07:06:24 +0000 (0:00:00.315) 0:06:14.280 ******
2025-09-19 07:10:48.355078 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.355084 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.355093 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.355098 | orchestrator |
2025-09-19 07:10:48.355104 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-19 07:10:48.355110 | orchestrator | Friday 19 September 2025  07:06:25 +0000 (0:00:00.304) 0:06:14.584 ******
2025-09-19 07:10:48.355115 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.355121 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.355126 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.355132 | orchestrator |
2025-09-19 07:10:48.355137 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-19 07:10:48.355143 | orchestrator | Friday 19 September 2025  07:06:25 +0000 (0:00:00.324) 0:06:14.908 ******
2025-09-19 07:10:48.355148 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.355154 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.355159 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.355165 | orchestrator |
2025-09-19 07:10:48.355170 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-19 07:10:48.355176 | orchestrator | Friday 19 September 2025  07:06:26 +0000 (0:00:00.952) 0:06:15.861 ******
2025-09-19 07:10:48.355182 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.355187 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.355193 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.355198 | orchestrator |
2025-09-19 07:10:48.355204 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-19 07:10:48.355209 | orchestrator | Friday 19 September 2025  07:06:27 +0000 (0:00:00.694) 0:06:16.555 ******
2025-09-19 07:10:48.355215 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.355220 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.355226 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.355231 | orchestrator |
2025-09-19 07:10:48.355237 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-19 07:10:48.355242 | orchestrator | Friday 19 September 2025  07:06:27 +0000 (0:00:00.360) 0:06:16.916 ******
2025-09-19 07:10:48.355248 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.355253 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.355259 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.355264 | orchestrator |
2025-09-19 07:10:48.355270 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-19 07:10:48.355276 | orchestrator | Friday 19 September 2025  07:06:27 +0000 (0:00:00.347) 0:06:17.263 ******
2025-09-19 07:10:48.355281 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.355287 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.355292 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.355298 | orchestrator |
2025-09-19 07:10:48.355303 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-19 07:10:48.355309 | orchestrator | Friday 19 September 2025  07:06:28 +0000 (0:00:00.609) 0:06:17.872 ******
2025-09-19 07:10:48.355314 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.355320 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.355325 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.355331 | orchestrator |
2025-09-19 07:10:48.355337 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-19 07:10:48.355342 | orchestrator | Friday 19 September 2025  07:06:28 +0000 (0:00:00.354) 0:06:18.226 ******
2025-09-19 07:10:48.355348 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.355353 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.355359 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.355364 | orchestrator |
2025-09-19 07:10:48.355370 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-19 07:10:48.355375 | orchestrator | Friday 19 September 2025  07:06:29 +0000 (0:00:00.339) 0:06:18.566 ******
2025-09-19 07:10:48.355385 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.355391 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.355396 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.355402 | orchestrator |
2025-09-19 07:10:48.355411 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-19 07:10:48.355417 | orchestrator | Friday 19 September 2025  07:06:29 +0000 (0:00:00.329) 0:06:18.895 ******
2025-09-19 07:10:48.355422 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.355428 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.355433 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.355439 | orchestrator |
2025-09-19 07:10:48.355444 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-19 07:10:48.355450 | orchestrator | Friday 19 September 2025  07:06:30 +0000 (0:00:00.598) 0:06:19.493 ******
2025-09-19 07:10:48.355456 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.355461 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.355467 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.355472 | orchestrator |
2025-09-19 07:10:48.355478 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-19 07:10:48.355483 | orchestrator | Friday 19 September 2025  07:06:30 +0000 (0:00:00.332) 0:06:19.825 ******
2025-09-19 07:10:48.355489 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.355494 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.355500 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.355505 | orchestrator |
2025-09-19 07:10:48.355510 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-19 07:10:48.355515 | orchestrator | Friday 19 September 2025  07:06:30 +0000 (0:00:00.351) 0:06:20.176 ******
2025-09-19 07:10:48.355520 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.355525 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.355530 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.355535 | orchestrator |
2025-09-19 07:10:48.355540 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2025-09-19 07:10:48.355545 | orchestrator | Friday 19 September 2025  07:06:31 +0000 (0:00:00.541) 0:06:20.718 ******
2025-09-19 07:10:48.355550 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.355555 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.355560 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.355565 | orchestrator |
2025-09-19 07:10:48.355569 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2025-09-19 07:10:48.355574 | orchestrator | Friday 19 September 2025  07:06:31 +0000 (0:00:00.457) 0:06:21.175 ******
2025-09-19 07:10:48.355579 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-19 07:10:48.355587 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 07:10:48.355592 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 07:10:48.355597 | orchestrator |
2025-09-19 07:10:48.355602 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2025-09-19 07:10:48.355607 | orchestrator | Friday 19 September 2025  07:06:32 +0000 (0:00:00.557) 0:06:21.733 ******
2025-09-19 07:10:48.355612 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:10:48.355617 | orchestrator |
2025-09-19 07:10:48.355622 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2025-09-19 07:10:48.355627 | orchestrator | Friday 19 September 2025  07:06:32 +0000 (0:00:00.450) 0:06:22.184 ******
2025-09-19 07:10:48.355632 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.355637 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.355642 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.355647 | orchestrator |
2025-09-19 07:10:48.355652 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2025-09-19 07:10:48.355657 | orchestrator | Friday 19 September 2025  07:06:33 +0000 (0:00:00.416) 0:06:22.600 ******
2025-09-19 07:10:48.355665 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.355670 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.355675 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.355680 | orchestrator |
2025-09-19 07:10:48.355685 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2025-09-19 07:10:48.355689 | orchestrator | Friday 19 September 2025  07:06:33 +0000 (0:00:00.290) 0:06:22.891 ******
2025-09-19 07:10:48.355694 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.355699 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.355704 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.355709 | orchestrator |
2025-09-19 07:10:48.355714 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2025-09-19 07:10:48.355719 | orchestrator | Friday 19 September 2025  07:06:34 +0000 (0:00:00.565) 0:06:23.457 ******
2025-09-19 07:10:48.355724 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.355729 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.355734 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.355739 | orchestrator |
2025-09-19 07:10:48.355743 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2025-09-19 07:10:48.355748 | orchestrator | Friday 19 September 2025  07:06:34 +0000 (0:00:00.288) 0:06:23.745 ******
2025-09-19 07:10:48.355753 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-19 07:10:48.355758 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-19 07:10:48.355763 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-19 07:10:48.355768 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-19 07:10:48.355773 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-19 07:10:48.355778 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-19 07:10:48.355783 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-19 07:10:48.355788 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-19 07:10:48.355797 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-19 07:10:48.355802 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-19 07:10:48.355807 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-19 07:10:48.355812 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-19 07:10:48.355817 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-19 07:10:48.355822 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-19 07:10:48.355827 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-19 07:10:48.355832 | orchestrator |
2025-09-19 07:10:48.355837 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2025-09-19 07:10:48.355842 | orchestrator | Friday 19 September 2025  07:06:37 +0000 (0:00:03.111) 0:06:26.857 ******
2025-09-19 07:10:48.355847 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.355852 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.355857 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.355862 | orchestrator |
2025-09-19 07:10:48.355867 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2025-09-19 07:10:48.355872 | orchestrator | Friday 19 September 2025  07:06:37 +0000 (0:00:00.307) 0:06:27.164 ******
2025-09-19 07:10:48.355877 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:10:48.355885 | orchestrator |
2025-09-19 07:10:48.355890 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2025-09-19 07:10:48.355895 | orchestrator | Friday 19 September 2025  07:06:38 +0000 (0:00:00.486) 0:06:27.651 ******
2025-09-19 07:10:48.355900 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-19 07:10:48.355906 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-19 07:10:48.355911 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-19 07:10:48.355916 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-09-19 07:10:48.355921 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-09-19 07:10:48.355939 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-09-19 07:10:48.355954 | orchestrator |
2025-09-19 07:10:48.355960 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2025-09-19 07:10:48.355965 | orchestrator | Friday 19 September 2025  07:06:39 +0000 (0:00:01.078) 0:06:28.730 ******
2025-09-19 07:10:48.355970 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 07:10:48.355975 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-19 07:10:48.355980 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-19 07:10:48.355985 | orchestrator |
2025-09-19 07:10:48.355990 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2025-09-19 07:10:48.355995 | orchestrator | Friday 19 September 2025  07:06:41 +0000 (0:00:01.886) 0:06:30.617 ******
2025-09-19 07:10:48.356000 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-19 07:10:48.356005 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-09-19 07:10:48.356010 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:10:48.356015 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-19 07:10:48.356020 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-19 07:10:48.356025 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:10:48.356030 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-19 07:10:48.356035 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-09-19 07:10:48.356040 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:10:48.356045 | orchestrator |
2025-09-19 07:10:48.356050 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2025-09-19 07:10:48.356055 | orchestrator | Friday 19 September 2025  07:06:42 +0000 (0:00:01.426) 0:06:32.043 ******
2025-09-19 07:10:48.356060 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 07:10:48.356065 | orchestrator |
2025-09-19 07:10:48.356070 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2025-09-19 07:10:48.356075 | orchestrator | Friday 19 September 2025  07:06:44 +0000 (0:00:01.883) 0:06:33.927 ******
2025-09-19 07:10:48.356080 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:10:48.356085 | orchestrator |
2025-09-19 07:10:48.356090 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2025-09-19 07:10:48.356095 | orchestrator | Friday 19 September 2025  07:06:44 +0000 (0:00:00.496) 0:06:34.423 ******
2025-09-19 07:10:48.356100 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2af2e838-b751-5a2f-ab09-cbc0dc745073', 'data_vg': 'ceph-2af2e838-b751-5a2f-ab09-cbc0dc745073'})
2025-09-19 07:10:48.356106 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5631a8c0-2403-5b6d-b4ab-3f734fe52f75', 'data_vg': 'ceph-5631a8c0-2403-5b6d-b4ab-3f734fe52f75'})
2025-09-19 07:10:48.356111 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-787edb9c-1668-5795-8146-b6ac8c49142c', 'data_vg': 'ceph-787edb9c-1668-5795-8146-b6ac8c49142c'})
2025-09-19 07:10:48.356116 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-03228564-3151-5027-920d-737061be0eca', 'data_vg': 'ceph-03228564-3151-5027-920d-737061be0eca'})
2025-09-19 07:10:48.356124 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-32fceb46-e08d-5445-84d6-a85b98e59ab0', 'data_vg': 'ceph-32fceb46-e08d-5445-84d6-a85b98e59ab0'})
2025-09-19 07:10:48.356133 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-af475f18-71a6-5278-b018-36a08189cb1c', 'data_vg': 'ceph-af475f18-71a6-5278-b018-36a08189cb1c'})
2025-09-19 07:10:48.356138 | orchestrator |
2025-09-19 07:10:48.356143 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2025-09-19 07:10:48.356148 | orchestrator | Friday 19 September 2025  07:07:32 +0000 (0:00:47.207) 0:07:21.631 ******
2025-09-19 07:10:48.356153 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:10:48.356158 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:10:48.356162 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:10:48.356167 | orchestrator |
2025-09-19 07:10:48.356172 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2025-09-19 07:10:48.356177 | orchestrator | Friday 19 September 2025  07:07:32 +0000 (0:00:00.309) 0:07:21.941 ******
2025-09-19 07:10:48.356182 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:10:48.356187 | orchestrator |
2025-09-19 07:10:48.356192 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2025-09-19 07:10:48.356197 | orchestrator | Friday 19 September 2025  07:07:33 +0000 (0:00:00.518) 0:07:22.460 ******
2025-09-19 07:10:48.356202 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.356207 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.356212 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.356217 | orchestrator |
2025-09-19 07:10:48.356222 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2025-09-19 07:10:48.356227 | orchestrator | Friday 19 September 2025  07:07:33 +0000 (0:00:00.927) 0:07:23.388 ******
2025-09-19 07:10:48.356232 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.356237 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.356242 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.356247 | orchestrator |
2025-09-19 07:10:48.356252 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2025-09-19 07:10:48.356257 | orchestrator | Friday 19 September 2025  07:07:36 +0000 (0:00:02.435) 0:07:25.823 ******
2025-09-19 07:10:48.356262 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:10:48.356266 | orchestrator |
2025-09-19 07:10:48.356274 | orchestrator | TASK [ceph-osd :
Generate systemd unit file] *********************************** 2025-09-19 07:10:48.356279 | orchestrator | Friday 19 September 2025 07:07:36 +0000 (0:00:00.513) 0:07:26.337 ****** 2025-09-19 07:10:48.356284 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:10:48.356289 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:10:48.356294 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:10:48.356299 | orchestrator | 2025-09-19 07:10:48.356304 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-09-19 07:10:48.356309 | orchestrator | Friday 19 September 2025 07:07:38 +0000 (0:00:01.360) 0:07:27.698 ****** 2025-09-19 07:10:48.356314 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:10:48.356319 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:10:48.356324 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:10:48.356329 | orchestrator | 2025-09-19 07:10:48.356334 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-09-19 07:10:48.356339 | orchestrator | Friday 19 September 2025 07:07:39 +0000 (0:00:01.057) 0:07:28.755 ****** 2025-09-19 07:10:48.356344 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:10:48.356349 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:10:48.356354 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:10:48.356358 | orchestrator | 2025-09-19 07:10:48.356363 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-09-19 07:10:48.356368 | orchestrator | Friday 19 September 2025 07:07:40 +0000 (0:00:01.587) 0:07:30.343 ****** 2025-09-19 07:10:48.356373 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.356381 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.356386 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.356391 | orchestrator | 2025-09-19 07:10:48.356396 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2025-09-19 07:10:48.356401 | orchestrator | Friday 19 September 2025 07:07:41 +0000 (0:00:00.317) 0:07:30.660 ****** 2025-09-19 07:10:48.356406 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.356411 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.356416 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.356421 | orchestrator | 2025-09-19 07:10:48.356426 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-09-19 07:10:48.356431 | orchestrator | Friday 19 September 2025 07:07:41 +0000 (0:00:00.576) 0:07:31.236 ****** 2025-09-19 07:10:48.356436 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-19 07:10:48.356441 | orchestrator | ok: [testbed-node-4] => (item=4) 2025-09-19 07:10:48.356446 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-09-19 07:10:48.356451 | orchestrator | ok: [testbed-node-3] => (item=5) 2025-09-19 07:10:48.356456 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-09-19 07:10:48.356461 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-09-19 07:10:48.356466 | orchestrator | 2025-09-19 07:10:48.356471 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-09-19 07:10:48.356476 | orchestrator | Friday 19 September 2025 07:07:42 +0000 (0:00:00.945) 0:07:32.182 ****** 2025-09-19 07:10:48.356481 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-19 07:10:48.356486 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-09-19 07:10:48.356491 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-09-19 07:10:48.356496 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-09-19 07:10:48.356501 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-09-19 07:10:48.356506 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-09-19 07:10:48.356511 | orchestrator | 2025-09-19 07:10:48.356516 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-09-19 07:10:48.356521 | orchestrator | Friday 19 September 2025 07:07:44 +0000 (0:00:02.009) 0:07:34.192 ****** 2025-09-19 07:10:48.356526 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-19 07:10:48.356531 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-09-19 07:10:48.356538 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-09-19 07:10:48.356543 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-09-19 07:10:48.356548 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-09-19 07:10:48.356553 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-09-19 07:10:48.356558 | orchestrator | 2025-09-19 07:10:48.356563 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-09-19 07:10:48.356568 | orchestrator | Friday 19 September 2025 07:07:49 +0000 (0:00:04.435) 0:07:38.628 ****** 2025-09-19 07:10:48.356573 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.356578 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.356583 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-19 07:10:48.356588 | orchestrator | 2025-09-19 07:10:48.356593 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-09-19 07:10:48.356598 | orchestrator | Friday 19 September 2025 07:07:52 +0000 (0:00:03.174) 0:07:41.803 ****** 2025-09-19 07:10:48.356603 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.356608 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.356613 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2025-09-19 07:10:48.356618 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Include crush_rules.yml] **************************************
Friday 19 September 2025 07:08:04 +0000 (0:00:12.267) 0:07:54.070 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Friday 19 September 2025 07:08:05 +0000 (0:00:01.023) 0:07:55.093 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Osds handler] **********************************
Friday 19 September 2025 07:08:05 +0000 (0:00:00.289) 0:07:55.383 ******
included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
Friday 19 September 2025 07:08:06 +0000 (0:00:00.547) 0:07:55.930 ******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
Friday 19 September 2025 07:08:07 +0000 (0:00:00.923) 0:07:56.854 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
Friday 19 September 2025 07:08:07 +0000 (0:00:00.324) 0:07:57.178 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
Friday 19 September 2025 07:08:07 +0000 (0:00:00.207) 0:07:57.386 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Get pool list] *********************************
Friday 19 September 2025 07:08:08 +0000 (0:00:00.374) 0:07:57.761 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
Friday 19 September 2025 07:08:08 +0000 (0:00:00.233) 0:07:57.994 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
Friday 19 September 2025 07:08:08 +0000 (0:00:00.242) 0:07:58.237 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
Friday 19 September 2025 07:08:08 +0000 (0:00:00.120) 0:07:58.358 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
Friday 19 September 2025 07:08:09 +0000 (0:00:00.218) 0:07:58.576 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
Friday 19 September 2025 07:08:09 +0000 (0:00:00.783) 0:07:59.360 ******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
Friday 19 September 2025 07:08:10 +0000 (0:00:00.412) 0:07:59.772 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
Friday 19 September 2025 07:08:10 +0000 (0:00:00.404) 0:08:00.177 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
Friday 19 September 2025 07:08:10 +0000 (0:00:00.223) 0:08:00.400 ******
skipping: [testbed-node-3]

PLAY [Apply role ceph-crash] ***************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Friday 19 September 2025 07:08:11 +0000 (0:00:00.670) 0:08:01.071 ******
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Friday 19 September 2025 07:08:12 +0000 (0:00:01.266) 0:08:02.337 ******
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Friday 19 September 2025 07:08:14 +0000 (0:00:01.242) 0:08:03.580 ******
ok: [testbed-node-0]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
ok: [testbed-node-1]
skipping: [testbed-node-5]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Friday 19 September 2025 07:08:15 +0000 (0:00:00.902) 0:08:04.483 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Friday 19 September 2025 07:08:16 +0000 (0:00:00.950) 0:08:05.434 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Friday 19 September 2025 07:08:17 +0000 (0:00:01.189) 0:08:06.623 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Friday 19 September 2025 07:08:17 +0000 (0:00:00.798) 0:08:07.421 ******
ok: [testbed-node-0]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
ok: [testbed-node-1]
skipping: [testbed-node-5]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Friday 19 September 2025 07:08:18 +0000 (0:00:00.817) 0:08:08.239 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Friday 19 September 2025 07:08:19 +0000 (0:00:00.488) 0:08:08.727 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Friday 19 September 2025 07:08:19 +0000 (0:00:00.637) 0:08:09.365 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Friday 19 September 2025 07:08:20 +0000 (0:00:00.865) 0:08:10.231 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Friday 19 September 2025 07:08:22 +0000 (0:00:01.201) 0:08:11.432 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Friday 19 September 2025 07:08:22 +0000 (0:00:00.607) 0:08:12.040 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Friday 19 September 2025 07:08:23 +0000 (0:00:00.857) 0:08:12.897 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Friday 19 September 2025 07:08:24 +0000 (0:00:00.607) 0:08:13.505 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Friday 19 September 2025 07:08:24 +0000 (0:00:00.837) 0:08:14.342 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Friday 19 September 2025 07:08:25 +0000 (0:00:00.628) 0:08:14.970 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Friday 19 September 2025 07:08:26 +0000 (0:00:00.818) 0:08:15.789 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Friday 19 September 2025 07:08:26 +0000 (0:00:00.582) 0:08:16.372 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Friday 19 September 2025 07:08:27 +0000 (0:00:00.824) 0:08:17.196 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Friday 19 September 2025 07:08:28 +0000 (0:00:00.610) 0:08:17.806 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-crash : Create client.crash keyring] ********************************
Friday 19 September 2025 07:08:29 +0000 (0:00:01.255) 0:08:19.062 ******
changed: [testbed-node-0]

TASK [ceph-crash : Get keys from monitors] *************************************
Friday 19 September 2025 07:08:33 +0000 (0:00:03.800) 0:08:22.863 ******
ok: [testbed-node-0]

TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
Friday 19 September 2025 07:08:35 +0000 (0:00:02.425) 0:08:25.288 ******
ok: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
Friday 19 September 2025 07:08:37 +0000 (0:00:01.514) 0:08:26.803 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-crash : Include_tasks systemd.yml] **********************************
Friday 19 September 2025 07:08:38 +0000 (0:00:01.200) 0:08:28.003 ******
included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
Friday 19 September 2025 07:08:39 +0000 (0:00:01.220) 0:08:29.224 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-crash : Start the ceph-crash service] *******************************
Friday 19 September 2025 07:08:41 +0000 (0:00:01.530) 0:08:30.754 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
Friday 19 September 2025 07:08:44 +0000 (0:00:03.548) 0:08:34.302 ******
included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-3, testbed-node-2, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
Friday 19 September 2025 07:08:46 +0000 (0:00:01.292) 0:08:35.595 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
Friday 19 September 2025 07:08:47 +0000 (0:00:00.845) 0:08:36.440 ******
changed: [testbed-node-0]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-5]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
Friday 19 September 2025 07:08:49 +0000 (0:00:02.433) 0:08:38.873 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok:
[testbed-node-2] 2025-09-19 07:10:48.358438 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.358443 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.358448 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.358453 | orchestrator | 2025-09-19 07:10:48.358458 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-09-19 07:10:48.358463 | orchestrator | 2025-09-19 07:10:48.358468 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-19 07:10:48.358473 | orchestrator | Friday 19 September 2025 07:08:50 +0000 (0:00:00.848) 0:08:39.722 ****** 2025-09-19 07:10:48.358478 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:10:48.358483 | orchestrator | 2025-09-19 07:10:48.358488 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-19 07:10:48.358493 | orchestrator | Friday 19 September 2025 07:08:51 +0000 (0:00:00.722) 0:08:40.444 ****** 2025-09-19 07:10:48.358498 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:10:48.358503 | orchestrator | 2025-09-19 07:10:48.358508 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-19 07:10:48.358513 | orchestrator | Friday 19 September 2025 07:08:51 +0000 (0:00:00.534) 0:08:40.979 ****** 2025-09-19 07:10:48.358518 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.358523 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.358528 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.358533 | orchestrator | 2025-09-19 07:10:48.358538 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-19 07:10:48.358543 | orchestrator | 
Friday 19 September 2025 07:08:51 +0000 (0:00:00.307) 0:08:41.287 ****** 2025-09-19 07:10:48.358548 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.358553 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.358558 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.358563 | orchestrator | 2025-09-19 07:10:48.358568 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-19 07:10:48.358573 | orchestrator | Friday 19 September 2025 07:08:52 +0000 (0:00:01.022) 0:08:42.309 ****** 2025-09-19 07:10:48.358578 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.358583 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.358588 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.358593 | orchestrator | 2025-09-19 07:10:48.358601 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-19 07:10:48.358606 | orchestrator | Friday 19 September 2025 07:08:53 +0000 (0:00:00.666) 0:08:42.975 ****** 2025-09-19 07:10:48.358611 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.358616 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.358621 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.358626 | orchestrator | 2025-09-19 07:10:48.358630 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-19 07:10:48.358635 | orchestrator | Friday 19 September 2025 07:08:54 +0000 (0:00:00.650) 0:08:43.626 ****** 2025-09-19 07:10:48.358640 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.358645 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.358650 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.358655 | orchestrator | 2025-09-19 07:10:48.358661 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-19 07:10:48.358666 | orchestrator | Friday 19 September 2025 07:08:54 +0000 (0:00:00.263) 
0:08:43.889 ****** 2025-09-19 07:10:48.358671 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.358679 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.358684 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.358689 | orchestrator | 2025-09-19 07:10:48.358694 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-19 07:10:48.358699 | orchestrator | Friday 19 September 2025 07:08:54 +0000 (0:00:00.427) 0:08:44.317 ****** 2025-09-19 07:10:48.358704 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.358709 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.358714 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.358719 | orchestrator | 2025-09-19 07:10:48.358724 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-19 07:10:48.358729 | orchestrator | Friday 19 September 2025 07:08:55 +0000 (0:00:00.295) 0:08:44.613 ****** 2025-09-19 07:10:48.358734 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.358739 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.358744 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.358749 | orchestrator | 2025-09-19 07:10:48.358753 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-19 07:10:48.358758 | orchestrator | Friday 19 September 2025 07:08:55 +0000 (0:00:00.692) 0:08:45.306 ****** 2025-09-19 07:10:48.358763 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.358768 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.358773 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.358778 | orchestrator | 2025-09-19 07:10:48.358783 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-19 07:10:48.358788 | orchestrator | Friday 19 September 2025 07:08:56 +0000 (0:00:00.690) 0:08:45.996 ****** 2025-09-19 
07:10:48.358793 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.358798 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.358803 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.358808 | orchestrator | 2025-09-19 07:10:48.358813 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-19 07:10:48.358818 | orchestrator | Friday 19 September 2025 07:08:56 +0000 (0:00:00.433) 0:08:46.430 ****** 2025-09-19 07:10:48.358823 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.358828 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.358833 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.358838 | orchestrator | 2025-09-19 07:10:48.358843 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-19 07:10:48.358848 | orchestrator | Friday 19 September 2025 07:08:57 +0000 (0:00:00.282) 0:08:46.712 ****** 2025-09-19 07:10:48.358853 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.358858 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.358863 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.358868 | orchestrator | 2025-09-19 07:10:48.358875 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-19 07:10:48.358880 | orchestrator | Friday 19 September 2025 07:08:57 +0000 (0:00:00.304) 0:08:47.017 ****** 2025-09-19 07:10:48.358885 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.358890 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.358895 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.358900 | orchestrator | 2025-09-19 07:10:48.358905 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-19 07:10:48.358910 | orchestrator | Friday 19 September 2025 07:08:57 +0000 (0:00:00.278) 0:08:47.296 ****** 2025-09-19 07:10:48.358915 | orchestrator | ok: 
[testbed-node-3] 2025-09-19 07:10:48.358920 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.358925 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.358930 | orchestrator | 2025-09-19 07:10:48.358935 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-19 07:10:48.358940 | orchestrator | Friday 19 September 2025 07:08:58 +0000 (0:00:00.445) 0:08:47.741 ****** 2025-09-19 07:10:48.358973 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.358980 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.358989 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.358994 | orchestrator | 2025-09-19 07:10:48.358999 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-19 07:10:48.359004 | orchestrator | Friday 19 September 2025 07:08:58 +0000 (0:00:00.279) 0:08:48.021 ****** 2025-09-19 07:10:48.359009 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.359014 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.359019 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.359024 | orchestrator | 2025-09-19 07:10:48.359029 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-19 07:10:48.359034 | orchestrator | Friday 19 September 2025 07:08:58 +0000 (0:00:00.260) 0:08:48.282 ****** 2025-09-19 07:10:48.359038 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.359043 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.359048 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.359053 | orchestrator | 2025-09-19 07:10:48.359058 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-19 07:10:48.359063 | orchestrator | Friday 19 September 2025 07:08:59 +0000 (0:00:00.267) 0:08:48.549 ****** 2025-09-19 07:10:48.359068 | orchestrator | ok: [testbed-node-3] 
2025-09-19 07:10:48.359073 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.359078 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.359083 | orchestrator | 2025-09-19 07:10:48.359088 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-19 07:10:48.359096 | orchestrator | Friday 19 September 2025 07:08:59 +0000 (0:00:00.468) 0:08:49.018 ****** 2025-09-19 07:10:48.359101 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.359106 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.359111 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.359116 | orchestrator | 2025-09-19 07:10:48.359121 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-09-19 07:10:48.359127 | orchestrator | Friday 19 September 2025 07:09:00 +0000 (0:00:00.459) 0:08:49.478 ****** 2025-09-19 07:10:48.359131 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.359136 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.359141 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-09-19 07:10:48.359146 | orchestrator | 2025-09-19 07:10:48.359150 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-09-19 07:10:48.359155 | orchestrator | Friday 19 September 2025 07:09:00 +0000 (0:00:00.488) 0:08:49.967 ****** 2025-09-19 07:10:48.359160 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-19 07:10:48.359164 | orchestrator | 2025-09-19 07:10:48.359169 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-09-19 07:10:48.359198 | orchestrator | Friday 19 September 2025 07:09:02 +0000 (0:00:02.102) 0:08:52.069 ****** 2025-09-19 07:10:48.359204 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 
'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-09-19 07:10:48.359210 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.359215 | orchestrator | 2025-09-19 07:10:48.359219 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-09-19 07:10:48.359224 | orchestrator | Friday 19 September 2025 07:09:02 +0000 (0:00:00.209) 0:08:52.279 ****** 2025-09-19 07:10:48.359229 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-19 07:10:48.359239 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-19 07:10:48.359248 | orchestrator | 2025-09-19 07:10:48.359252 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-09-19 07:10:48.359257 | orchestrator | Friday 19 September 2025 07:09:11 +0000 (0:00:08.607) 0:09:00.887 ****** 2025-09-19 07:10:48.359262 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-19 07:10:48.359267 | orchestrator | 2025-09-19 07:10:48.359271 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-09-19 07:10:48.359276 | orchestrator | Friday 19 September 2025 07:09:15 +0000 (0:00:03.642) 0:09:04.530 ****** 2025-09-19 07:10:48.359281 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:10:48.359286 | orchestrator | 2025-09-19 07:10:48.359293 | 
orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-09-19 07:10:48.359298 | orchestrator | Friday 19 September 2025 07:09:15 +0000 (0:00:00.617) 0:09:05.147 ****** 2025-09-19 07:10:48.359303 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-19 07:10:48.359308 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-09-19 07:10:48.359312 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-19 07:10:48.359317 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-19 07:10:48.359322 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-09-19 07:10:48.359327 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-09-19 07:10:48.359331 | orchestrator | 2025-09-19 07:10:48.359336 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-09-19 07:10:48.359341 | orchestrator | Friday 19 September 2025 07:09:17 +0000 (0:00:01.309) 0:09:06.456 ****** 2025-09-19 07:10:48.359345 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:10:48.359350 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-19 07:10:48.359355 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 07:10:48.359359 | orchestrator | 2025-09-19 07:10:48.359364 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-09-19 07:10:48.359369 | orchestrator | Friday 19 September 2025 07:09:19 +0000 (0:00:02.060) 0:09:08.517 ****** 2025-09-19 07:10:48.359373 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-19 07:10:48.359378 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-19 07:10:48.359383 | orchestrator | changed: [testbed-node-3] 
2025-09-19 07:10:48.359387 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-19 07:10:48.359392 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-19 07:10:48.359397 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-19 07:10:48.359401 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:10:48.359406 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-19 07:10:48.359411 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:10:48.359415 | orchestrator | 2025-09-19 07:10:48.359420 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-09-19 07:10:48.359428 | orchestrator | Friday 19 September 2025 07:09:20 +0000 (0:00:01.220) 0:09:09.738 ****** 2025-09-19 07:10:48.359433 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:10:48.359437 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:10:48.359442 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:10:48.359447 | orchestrator | 2025-09-19 07:10:48.359451 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-09-19 07:10:48.359456 | orchestrator | Friday 19 September 2025 07:09:23 +0000 (0:00:02.766) 0:09:12.504 ****** 2025-09-19 07:10:48.359461 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.359465 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.359470 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.359475 | orchestrator | 2025-09-19 07:10:48.359479 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-09-19 07:10:48.359488 | orchestrator | Friday 19 September 2025 07:09:23 +0000 (0:00:00.449) 0:09:12.954 ****** 2025-09-19 07:10:48.359492 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:10:48.359497 | orchestrator | 2025-09-19 07:10:48.359502 | 
orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-09-19 07:10:48.359507 | orchestrator | Friday 19 September 2025 07:09:24 +0000 (0:00:00.510) 0:09:13.464 ****** 2025-09-19 07:10:48.359511 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:10:48.359516 | orchestrator | 2025-09-19 07:10:48.359521 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-09-19 07:10:48.359525 | orchestrator | Friday 19 September 2025 07:09:24 +0000 (0:00:00.634) 0:09:14.099 ****** 2025-09-19 07:10:48.359530 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:10:48.359535 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:10:48.359539 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:10:48.359544 | orchestrator | 2025-09-19 07:10:48.359549 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-09-19 07:10:48.359553 | orchestrator | Friday 19 September 2025 07:09:25 +0000 (0:00:01.241) 0:09:15.340 ****** 2025-09-19 07:10:48.359558 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:10:48.359563 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:10:48.359567 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:10:48.359572 | orchestrator | 2025-09-19 07:10:48.359576 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-09-19 07:10:48.359581 | orchestrator | Friday 19 September 2025 07:09:27 +0000 (0:00:01.161) 0:09:16.502 ****** 2025-09-19 07:10:48.359586 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:10:48.359590 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:10:48.359595 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:10:48.359600 | orchestrator | 2025-09-19 07:10:48.359604 | orchestrator | TASK [ceph-mds : Systemd start mds container] 
********************************** 2025-09-19 07:10:48.359609 | orchestrator | Friday 19 September 2025 07:09:28 +0000 (0:00:01.788) 0:09:18.291 ****** 2025-09-19 07:10:48.359614 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:10:48.359618 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:10:48.359623 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:10:48.359628 | orchestrator | 2025-09-19 07:10:48.359632 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-09-19 07:10:48.359637 | orchestrator | Friday 19 September 2025 07:09:31 +0000 (0:00:02.475) 0:09:20.766 ****** 2025-09-19 07:10:48.359642 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.359646 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.359651 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.359656 | orchestrator | 2025-09-19 07:10:48.359663 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-19 07:10:48.359667 | orchestrator | Friday 19 September 2025 07:09:32 +0000 (0:00:01.416) 0:09:22.183 ****** 2025-09-19 07:10:48.359672 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:10:48.359677 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:10:48.359682 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:10:48.359686 | orchestrator | 2025-09-19 07:10:48.359691 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-19 07:10:48.359696 | orchestrator | Friday 19 September 2025 07:09:33 +0000 (0:00:00.887) 0:09:23.070 ****** 2025-09-19 07:10:48.359700 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:10:48.359705 | orchestrator | 2025-09-19 07:10:48.359710 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-19 07:10:48.359714 | orchestrator | 
Friday 19 September 2025 07:09:34 +0000 (0:00:00.481) 0:09:23.552 ****** 2025-09-19 07:10:48.359719 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.359727 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.359732 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.359736 | orchestrator | 2025-09-19 07:10:48.359741 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-19 07:10:48.359746 | orchestrator | Friday 19 September 2025 07:09:34 +0000 (0:00:00.370) 0:09:23.923 ****** 2025-09-19 07:10:48.359751 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:10:48.359755 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:10:48.359760 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:10:48.359765 | orchestrator | 2025-09-19 07:10:48.359769 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-19 07:10:48.359774 | orchestrator | Friday 19 September 2025 07:09:35 +0000 (0:00:01.152) 0:09:25.075 ****** 2025-09-19 07:10:48.359779 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 07:10:48.359783 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 07:10:48.359788 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 07:10:48.359793 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.359797 | orchestrator | 2025-09-19 07:10:48.359802 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-19 07:10:48.359807 | orchestrator | Friday 19 September 2025 07:09:36 +0000 (0:00:01.057) 0:09:26.133 ****** 2025-09-19 07:10:48.359811 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.359816 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.359825 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.359830 | orchestrator | 2025-09-19 07:10:48.359834 | orchestrator | PLAY [Apply role 
ceph-rgw] ***************************************************** 2025-09-19 07:10:48.359839 | orchestrator | 2025-09-19 07:10:48.359844 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-19 07:10:48.359848 | orchestrator | Friday 19 September 2025 07:09:37 +0000 (0:00:00.603) 0:09:26.736 ****** 2025-09-19 07:10:48.359853 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:10:48.359858 | orchestrator | 2025-09-19 07:10:48.359862 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-19 07:10:48.359867 | orchestrator | Friday 19 September 2025 07:09:38 +0000 (0:00:00.700) 0:09:27.437 ****** 2025-09-19 07:10:48.359872 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:10:48.359876 | orchestrator | 2025-09-19 07:10:48.359881 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-19 07:10:48.359886 | orchestrator | Friday 19 September 2025 07:09:38 +0000 (0:00:00.521) 0:09:27.959 ****** 2025-09-19 07:10:48.359890 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.359895 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.359900 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.359905 | orchestrator | 2025-09-19 07:10:48.359909 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-19 07:10:48.359914 | orchestrator | Friday 19 September 2025 07:09:38 +0000 (0:00:00.295) 0:09:28.254 ****** 2025-09-19 07:10:48.359919 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.359923 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.359928 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.359933 | orchestrator | 
2025-09-19 07:10:48.359937 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-19 07:10:48.359942 | orchestrator | Friday 19 September 2025 07:09:39 +0000 (0:00:00.903) 0:09:29.158 ****** 2025-09-19 07:10:48.359957 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.359961 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.359966 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.359971 | orchestrator | 2025-09-19 07:10:48.359976 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-19 07:10:48.359980 | orchestrator | Friday 19 September 2025 07:09:40 +0000 (0:00:00.720) 0:09:29.879 ****** 2025-09-19 07:10:48.359988 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.359993 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.359998 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.360002 | orchestrator | 2025-09-19 07:10:48.360007 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-19 07:10:48.360012 | orchestrator | Friday 19 September 2025 07:09:41 +0000 (0:00:00.756) 0:09:30.635 ****** 2025-09-19 07:10:48.360016 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.360021 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.360026 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.360031 | orchestrator | 2025-09-19 07:10:48.360035 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-19 07:10:48.360040 | orchestrator | Friday 19 September 2025 07:09:41 +0000 (0:00:00.320) 0:09:30.956 ****** 2025-09-19 07:10:48.360045 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.360049 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.360054 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.360059 | orchestrator | 2025-09-19 07:10:48.360064 | 
orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-19 07:10:48.360071 | orchestrator | Friday 19 September 2025 07:09:42 +0000 (0:00:00.545) 0:09:31.502 ****** 2025-09-19 07:10:48.360075 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.360080 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.360085 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.360090 | orchestrator | 2025-09-19 07:10:48.360094 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-19 07:10:48.360099 | orchestrator | Friday 19 September 2025 07:09:42 +0000 (0:00:00.348) 0:09:31.850 ****** 2025-09-19 07:10:48.360104 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.360108 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.360113 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.360118 | orchestrator | 2025-09-19 07:10:48.360122 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-19 07:10:48.360127 | orchestrator | Friday 19 September 2025 07:09:43 +0000 (0:00:00.741) 0:09:32.592 ****** 2025-09-19 07:10:48.360132 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.360136 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.360141 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.360146 | orchestrator | 2025-09-19 07:10:48.360151 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-19 07:10:48.360155 | orchestrator | Friday 19 September 2025 07:09:43 +0000 (0:00:00.760) 0:09:33.352 ****** 2025-09-19 07:10:48.360160 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.360165 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.360169 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.360174 | orchestrator | 2025-09-19 07:10:48.360179 | orchestrator | TASK [ceph-handler : 
Set_fact handler_mon_status] ****************************** 2025-09-19 07:10:48.360183 | orchestrator | Friday 19 September 2025 07:09:44 +0000 (0:00:00.572) 0:09:33.925 ****** 2025-09-19 07:10:48.360188 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.360193 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.360197 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.360202 | orchestrator | 2025-09-19 07:10:48.360207 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-19 07:10:48.360211 | orchestrator | Friday 19 September 2025 07:09:44 +0000 (0:00:00.307) 0:09:34.232 ****** 2025-09-19 07:10:48.360216 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.360221 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.360225 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.360230 | orchestrator | 2025-09-19 07:10:48.360235 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-19 07:10:48.360239 | orchestrator | Friday 19 September 2025 07:09:45 +0000 (0:00:00.331) 0:09:34.563 ****** 2025-09-19 07:10:48.360247 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.360255 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.360260 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.360265 | orchestrator | 2025-09-19 07:10:48.360269 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-19 07:10:48.360274 | orchestrator | Friday 19 September 2025 07:09:45 +0000 (0:00:00.345) 0:09:34.909 ****** 2025-09-19 07:10:48.360279 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.360283 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.360288 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.360293 | orchestrator | 2025-09-19 07:10:48.360297 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] 
****************************** 2025-09-19 07:10:48.360302 | orchestrator | Friday 19 September 2025 07:09:46 +0000 (0:00:00.588) 0:09:35.497 ****** 2025-09-19 07:10:48.360307 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.360311 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.360316 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.360321 | orchestrator | 2025-09-19 07:10:48.360326 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-19 07:10:48.360330 | orchestrator | Friday 19 September 2025 07:09:46 +0000 (0:00:00.317) 0:09:35.815 ****** 2025-09-19 07:10:48.360335 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.360340 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.360344 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.360349 | orchestrator | 2025-09-19 07:10:48.360354 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-19 07:10:48.360358 | orchestrator | Friday 19 September 2025 07:09:46 +0000 (0:00:00.301) 0:09:36.116 ****** 2025-09-19 07:10:48.360363 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.360368 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.360372 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.360377 | orchestrator | 2025-09-19 07:10:48.360382 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-19 07:10:48.360386 | orchestrator | Friday 19 September 2025 07:09:46 +0000 (0:00:00.307) 0:09:36.424 ****** 2025-09-19 07:10:48.360391 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.360396 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.360400 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.360405 | orchestrator | 2025-09-19 07:10:48.360410 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] 
************************* 2025-09-19 07:10:48.360414 | orchestrator | Friday 19 September 2025 07:09:47 +0000 (0:00:00.596) 0:09:37.020 ****** 2025-09-19 07:10:48.360419 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.360424 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.360428 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.360433 | orchestrator | 2025-09-19 07:10:48.360438 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-09-19 07:10:48.360443 | orchestrator | Friday 19 September 2025 07:09:48 +0000 (0:00:00.564) 0:09:37.585 ****** 2025-09-19 07:10:48.360447 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:10:48.360452 | orchestrator | 2025-09-19 07:10:48.360457 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-19 07:10:48.360461 | orchestrator | Friday 19 September 2025 07:09:48 +0000 (0:00:00.798) 0:09:38.383 ****** 2025-09-19 07:10:48.360466 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:10:48.360471 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-19 07:10:48.360476 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 07:10:48.360480 | orchestrator | 2025-09-19 07:10:48.360485 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-19 07:10:48.360492 | orchestrator | Friday 19 September 2025 07:09:51 +0000 (0:00:02.130) 0:09:40.514 ****** 2025-09-19 07:10:48.360496 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-19 07:10:48.360501 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-19 07:10:48.360509 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:10:48.360514 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-19 07:10:48.360518 
| orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-19 07:10:48.360523 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:10:48.360528 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-19 07:10:48.360532 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-19 07:10:48.360537 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:10:48.360542 | orchestrator | 2025-09-19 07:10:48.360547 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-09-19 07:10:48.360551 | orchestrator | Friday 19 September 2025 07:09:52 +0000 (0:00:01.156) 0:09:41.670 ****** 2025-09-19 07:10:48.360556 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.360561 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.360565 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.360570 | orchestrator | 2025-09-19 07:10:48.360575 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-09-19 07:10:48.360579 | orchestrator | Friday 19 September 2025 07:09:52 +0000 (0:00:00.296) 0:09:41.967 ****** 2025-09-19 07:10:48.360584 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:10:48.360589 | orchestrator | 2025-09-19 07:10:48.360593 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-09-19 07:10:48.360598 | orchestrator | Friday 19 September 2025 07:09:53 +0000 (0:00:00.606) 0:09:42.574 ****** 2025-09-19 07:10:48.360603 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 07:10:48.360608 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 
'radosgw_frontend_port': 8081}) 2025-09-19 07:10:48.360615 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 07:10:48.360620 | orchestrator | 2025-09-19 07:10:48.360625 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-09-19 07:10:48.360629 | orchestrator | Friday 19 September 2025 07:09:53 +0000 (0:00:00.737) 0:09:43.311 ****** 2025-09-19 07:10:48.360634 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:10:48.360639 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-19 07:10:48.360643 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:10:48.360648 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-19 07:10:48.360653 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:10:48.360658 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-19 07:10:48.360662 | orchestrator | 2025-09-19 07:10:48.360667 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-19 07:10:48.360672 | orchestrator | Friday 19 September 2025 07:09:58 +0000 (0:00:04.424) 0:09:47.736 ****** 2025-09-19 07:10:48.360676 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:10:48.360681 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:10:48.360686 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] 
}}] 2025-09-19 07:10:48.360690 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 07:10:48.360695 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:10:48.360704 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 07:10:48.360709 | orchestrator | 2025-09-19 07:10:48.360713 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-19 07:10:48.360718 | orchestrator | Friday 19 September 2025 07:10:00 +0000 (0:00:02.226) 0:09:49.963 ****** 2025-09-19 07:10:48.360723 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-19 07:10:48.360727 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:10:48.360732 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-19 07:10:48.360737 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:10:48.360741 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-19 07:10:48.360746 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:10:48.360751 | orchestrator | 2025-09-19 07:10:48.360755 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-09-19 07:10:48.360760 | orchestrator | Friday 19 September 2025 07:10:01 +0000 (0:00:01.381) 0:09:51.344 ****** 2025-09-19 07:10:48.360765 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-09-19 07:10:48.360770 | orchestrator | 2025-09-19 07:10:48.360774 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-09-19 07:10:48.360779 | orchestrator | Friday 19 September 2025 07:10:02 +0000 (0:00:00.220) 0:09:51.565 ****** 2025-09-19 07:10:48.360786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 07:10:48.360792 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 07:10:48.360796 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 07:10:48.360801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 07:10:48.360806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 07:10:48.360811 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.360815 | orchestrator | 2025-09-19 07:10:48.360820 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-09-19 07:10:48.360824 | orchestrator | Friday 19 September 2025 07:10:02 +0000 (0:00:00.537) 0:09:52.103 ****** 2025-09-19 07:10:48.360829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 07:10:48.360834 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 07:10:48.360839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 07:10:48.360843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 07:10:48.360848 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 07:10:48.360853 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.360857 | orchestrator | 2025-09-19 07:10:48.360862 | 
orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-09-19 07:10:48.360867 | orchestrator | Friday 19 September 2025 07:10:03 +0000 (0:00:00.525) 0:09:52.628 ****** 2025-09-19 07:10:48.360872 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 07:10:48.360877 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 07:10:48.360885 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 07:10:48.360890 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 07:10:48.360895 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 07:10:48.360900 | orchestrator | 2025-09-19 07:10:48.360904 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-09-19 07:10:48.360909 | orchestrator | Friday 19 September 2025 07:10:33 +0000 (0:00:30.773) 0:10:23.402 ****** 2025-09-19 07:10:48.360914 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.360918 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.360923 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.360928 | orchestrator | 2025-09-19 07:10:48.360932 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-09-19 07:10:48.360937 | orchestrator | Friday 19 September 2025 07:10:34 +0000 (0:00:00.315) 0:10:23.717 
****** 2025-09-19 07:10:48.360942 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.360957 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.360962 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.360967 | orchestrator | 2025-09-19 07:10:48.360971 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-09-19 07:10:48.360976 | orchestrator | Friday 19 September 2025 07:10:34 +0000 (0:00:00.569) 0:10:24.286 ****** 2025-09-19 07:10:48.360981 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:10:48.360985 | orchestrator | 2025-09-19 07:10:48.360990 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-09-19 07:10:48.360995 | orchestrator | Friday 19 September 2025 07:10:35 +0000 (0:00:00.552) 0:10:24.838 ****** 2025-09-19 07:10:48.361000 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:10:48.361004 | orchestrator | 2025-09-19 07:10:48.361009 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-09-19 07:10:48.361014 | orchestrator | Friday 19 September 2025 07:10:35 +0000 (0:00:00.514) 0:10:25.353 ****** 2025-09-19 07:10:48.361018 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:10:48.361023 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:10:48.361028 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:10:48.361033 | orchestrator | 2025-09-19 07:10:48.361037 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-09-19 07:10:48.361042 | orchestrator | Friday 19 September 2025 07:10:37 +0000 (0:00:01.615) 0:10:26.968 ****** 2025-09-19 07:10:48.361049 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:10:48.361054 | orchestrator | changed: 
[testbed-node-4] 2025-09-19 07:10:48.361059 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:10:48.361063 | orchestrator | 2025-09-19 07:10:48.361068 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-09-19 07:10:48.361073 | orchestrator | Friday 19 September 2025 07:10:38 +0000 (0:00:01.171) 0:10:28.139 ****** 2025-09-19 07:10:48.361077 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:10:48.361082 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:10:48.361087 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:10:48.361091 | orchestrator | 2025-09-19 07:10:48.361096 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-09-19 07:10:48.361101 | orchestrator | Friday 19 September 2025 07:10:40 +0000 (0:00:01.797) 0:10:29.937 ****** 2025-09-19 07:10:48.361105 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 07:10:48.361114 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 07:10:48.361140 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 07:10:48.361145 | orchestrator | 2025-09-19 07:10:48.361149 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-19 07:10:48.361154 | orchestrator | Friday 19 September 2025 07:10:42 +0000 (0:00:02.407) 0:10:32.344 ****** 2025-09-19 07:10:48.361159 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.361163 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.361168 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.361173 | orchestrator | 2025-09-19 07:10:48.361177 | orchestrator | RUNNING HANDLER 
[ceph-handler : Rgws handler] ********************************** 2025-09-19 07:10:48.361182 | orchestrator | Friday 19 September 2025 07:10:43 +0000 (0:00:00.300) 0:10:32.644 ****** 2025-09-19 07:10:48.361187 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:10:48.361192 | orchestrator | 2025-09-19 07:10:48.361196 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-19 07:10:48.361203 | orchestrator | Friday 19 September 2025 07:10:43 +0000 (0:00:00.580) 0:10:33.225 ****** 2025-09-19 07:10:48.361208 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:10:48.361213 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:10:48.361218 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:10:48.361222 | orchestrator | 2025-09-19 07:10:48.361227 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-19 07:10:48.361232 | orchestrator | Friday 19 September 2025 07:10:44 +0000 (0:00:00.312) 0:10:33.537 ****** 2025-09-19 07:10:48.361236 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.361241 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:10:48.361246 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:10:48.361250 | orchestrator | 2025-09-19 07:10:48.361255 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-19 07:10:48.361260 | orchestrator | Friday 19 September 2025 07:10:44 +0000 (0:00:00.330) 0:10:33.868 ****** 2025-09-19 07:10:48.361264 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 07:10:48.361269 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 07:10:48.361274 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 07:10:48.361279 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:10:48.361283 | 
orchestrator |
2025-09-19 07:10:48.361288 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-09-19 07:10:48.361293 | orchestrator | Friday 19 September 2025 07:10:45 +0000 (0:00:00.846) 0:10:34.715 ******
2025-09-19 07:10:48.361297 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:10:48.361302 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:10:48.361307 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:10:48.361311 | orchestrator |
2025-09-19 07:10:48.361316 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:10:48.361321 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0
2025-09-19 07:10:48.361326 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2025-09-19 07:10:48.361330 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2025-09-19 07:10:48.361335 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0
2025-09-19 07:10:48.361340 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2025-09-19 07:10:48.361348 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2025-09-19 07:10:48.361353 | orchestrator |
2025-09-19 07:10:48.361357 | orchestrator |
2025-09-19 07:10:48.361362 | orchestrator |
2025-09-19 07:10:48.361367 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:10:48.361371 | orchestrator | Friday 19 September 2025 07:10:45 +0000 (0:00:00.240) 0:10:34.955 ******
2025-09-19 07:10:48.361376 | orchestrator | ===============================================================================
2025-09-19 07:10:48.361381 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 47.44s
2025-09-19 07:10:48.361388 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 47.21s
2025-09-19 07:10:48.361393 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.77s
2025-09-19 07:10:48.361397 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.20s
2025-09-19 07:10:48.361402 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.64s
2025-09-19 07:10:48.361407 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.27s
2025-09-19 07:10:48.361412 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.67s
2025-09-19 07:10:48.361416 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.95s
2025-09-19 07:10:48.361421 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.61s
2025-09-19 07:10:48.361426 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.90s
2025-09-19 07:10:48.361430 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.55s
2025-09-19 07:10:48.361435 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.41s
2025-09-19 07:10:48.361440 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.44s
2025-09-19 07:10:48.361444 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.42s
2025-09-19 07:10:48.361449 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.86s
2025-09-19 07:10:48.361454 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.80s
2025-09-19 07:10:48.361458 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.64s
2025-09-19 07:10:48.361463 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.55s
2025-09-19 07:10:48.361468 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.45s
2025-09-19 07:10:48.361472 | orchestrator | ceph-osd : Unset noup flag ---------------------------------------------- 3.17s
2025-09-19 07:10:48.361477 | orchestrator | 2025-09-19 07:10:48 | INFO  | Task a0949288-5b2f-4732-bb47-356f652a548f is in state STARTED
2025-09-19 07:10:48.361484 | orchestrator | 2025-09-19 07:10:48 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED
2025-09-19 07:10:48.361489 | orchestrator | 2025-09-19 07:10:48 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED
2025-09-19 07:10:48.361494 | orchestrator | 2025-09-19 07:10:48 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:10:51.389414 | orchestrator | 2025-09-19 07:10:51 | INFO  | Task a0949288-5b2f-4732-bb47-356f652a548f is in state STARTED
2025-09-19 07:10:51.391186 | orchestrator | 2025-09-19 07:10:51 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED
2025-09-19 07:10:51.392244 | orchestrator | 2025-09-19 07:10:51 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED
2025-09-19 07:10:51.392267 | orchestrator | 2025-09-19 07:10:51 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:10:54.440302 | orchestrator | 2025-09-19 07:10:54 | INFO  | Task a0949288-5b2f-4732-bb47-356f652a548f is in state STARTED
2025-09-19 07:10:54.441695 | orchestrator | 2025-09-19 07:10:54 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED
2025-09-19 07:10:54.443211 | orchestrator | 2025-09-19 07:10:54 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED
2025-09-19 07:10:54.443329 | orchestrator | 2025-09-19 07:10:54 | INFO  | Wait 1 second(s) until the next
check
| 2025-09-19 07:11:43 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:11:43.211381 | orchestrator | 2025-09-19 07:11:43 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED 2025-09-19 07:11:43.211588 | orchestrator | 2025-09-19 07:11:43 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:11:46.262419 | orchestrator | 2025-09-19 07:11:46 | INFO  | Task a0949288-5b2f-4732-bb47-356f652a548f is in state STARTED 2025-09-19 07:11:46.263080 | orchestrator | 2025-09-19 07:11:46 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:11:46.264926 | orchestrator | 2025-09-19 07:11:46 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED 2025-09-19 07:11:46.265009 | orchestrator | 2025-09-19 07:11:46 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:11:49.305554 | orchestrator | 2025-09-19 07:11:49 | INFO  | Task a0949288-5b2f-4732-bb47-356f652a548f is in state STARTED 2025-09-19 07:11:49.307548 | orchestrator | 2025-09-19 07:11:49 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:11:49.309964 | orchestrator | 2025-09-19 07:11:49 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED 2025-09-19 07:11:49.310001 | orchestrator | 2025-09-19 07:11:49 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:11:52.353068 | orchestrator | 2025-09-19 07:11:52 | INFO  | Task a0949288-5b2f-4732-bb47-356f652a548f is in state STARTED 2025-09-19 07:11:52.354472 | orchestrator | 2025-09-19 07:11:52 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:11:52.356936 | orchestrator | 2025-09-19 07:11:52 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED 2025-09-19 07:11:52.357157 | orchestrator | 2025-09-19 07:11:52 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:11:55.398866 | orchestrator | 2025-09-19 07:11:55 | INFO  | Task 
a0949288-5b2f-4732-bb47-356f652a548f is in state SUCCESS
2025-09-19 07:11:55.400323 | orchestrator |
2025-09-19 07:11:55.400368 | orchestrator |
2025-09-19 07:11:55.400382 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 07:11:55.400395 | orchestrator |
2025-09-19 07:11:55.400535 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 07:11:55.400551 | orchestrator | Friday 19 September 2025 07:09:02 +0000 (0:00:00.231) 0:00:00.231 ******
2025-09-19 07:11:55.400562 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:11:55.400575 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:11:55.400586 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:11:55.400597 | orchestrator |
2025-09-19 07:11:55.400608 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 07:11:55.400648 | orchestrator | Friday 19 September 2025 07:09:02 +0000 (0:00:00.265) 0:00:00.497 ******
2025-09-19 07:11:55.400661 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-09-19 07:11:55.400672 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-09-19 07:11:55.400683 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-09-19 07:11:55.400694 | orchestrator |
2025-09-19 07:11:55.400706 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-09-19 07:11:55.400716 | orchestrator |
2025-09-19 07:11:55.400727 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-09-19 07:11:55.400738 | orchestrator | Friday 19 September 2025 07:09:02 +0000 (0:00:00.363) 0:00:00.861 ******
2025-09-19 07:11:55.400749 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:11:55.400760 | orchestrator |
2025-09-19 07:11:55.400771 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-09-19 07:11:55.400782 | orchestrator | Friday 19 September 2025 07:09:03 +0000 (0:00:00.443) 0:00:01.305 ******
2025-09-19 07:11:55.400793 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 07:11:55.400804 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 07:11:55.400815 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 07:11:55.400826 | orchestrator |
2025-09-19 07:11:55.400837 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2025-09-19 07:11:55.400848 | orchestrator | Friday 19 September 2025 07:09:04 +0000 (0:00:00.646) 0:00:01.951 ******
2025-09-19 07:11:55.400877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-19 07:11:55.400932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:11:55.400956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:11:55.400982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:11:55.400997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:11:55.401016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:11:55.401029 | orchestrator | 2025-09-19 07:11:55.401040 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-19 07:11:55.401052 | orchestrator | Friday 19 September 2025 07:09:05 +0000 (0:00:01.573) 0:00:03.524 ****** 2025-09-19 07:11:55.401063 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:11:55.401075 | orchestrator | 2025-09-19 07:11:55.401086 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-09-19 07:11:55.401104 | orchestrator | Friday 19 September 2025 07:09:06 +0000 (0:00:00.527) 0:00:04.052 ****** 2025-09-19 07:11:55.401126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g 
-Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:11:55.401139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:11:55.401151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:11:55.401169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:11:55.401192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:11:55.401215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:11:55.401229 | orchestrator | 2025-09-19 07:11:55.401242 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-09-19 07:11:55.401254 | orchestrator | Friday 19 September 2025 07:09:08 +0000 (0:00:02.487) 0:00:06.539 ****** 
2025-09-19 07:11:55.401379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 07:11:55.401401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  
2025-09-19 07:11:55.401423 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:11:55.401444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 07:11:55.401457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 07:11:55.401469 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:11:55.401480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 07:11:55.401497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-19 07:11:55.401519 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:11:55.401530 | orchestrator |
2025-09-19 07:11:55.401541 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2025-09-19 07:11:55.401553 | orchestrator | Friday 19 September 2025 07:09:09 +0000 (0:00:01.140) 0:00:07.680 ******
2025-09-19 07:11:55.401572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-19 07:11:55.401585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 07:11:55.401597 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:11:55.401608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 07:11:55.401625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 07:11:55.401648 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:11:55.401665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 07:11:55.401678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 07:11:55.401690 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:11:55.401701 | orchestrator | 2025-09-19 07:11:55.401712 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-09-19 07:11:55.401723 | orchestrator | Friday 19 September 2025 07:09:10 +0000 (0:00:01.074) 0:00:08.754 ****** 2025-09-19 07:11:55.401735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:11:55.401752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:11:55.401776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:11:55.401795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:11:55.401808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:11:55.401826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:11:55.401845 | orchestrator | 2025-09-19 07:11:55.401857 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-09-19 07:11:55.401868 | orchestrator | Friday 19 September 2025 07:09:13 +0000 (0:00:02.404) 0:00:11.159 ****** 2025-09-19 07:11:55.401879 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:11:55.401914 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:11:55.401925 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:11:55.401936 | orchestrator | 2025-09-19 07:11:55.401947 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-09-19 07:11:55.401958 | orchestrator | Friday 19 September 2025 07:09:16 +0000 (0:00:03.432) 0:00:14.591 ****** 2025-09-19 07:11:55.401969 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:11:55.401980 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:11:55.401990 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:11:55.402001 | 
orchestrator | 2025-09-19 07:11:55.402012 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-09-19 07:11:55.402082 | orchestrator | Friday 19 September 2025 07:09:18 +0000 (0:00:01.766) 0:00:16.357 ****** 2025-09-19 07:11:55.402105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:11:55.402117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2025-09-19 07:11:55.402129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:11:55.402166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2025-09-19 07:11:55.402187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:11:55.402200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:11:55.402212 | orchestrator | 2025-09-19 07:11:55.402223 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-19 07:11:55.402234 | orchestrator | Friday 19 September 2025 07:09:20 +0000 (0:00:02.148) 0:00:18.506 ****** 2025-09-19 07:11:55.402245 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:11:55.402256 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:11:55.402267 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:11:55.402278 | orchestrator | 2025-09-19 07:11:55.402289 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-19 07:11:55.402306 | orchestrator | Friday 19 September 2025 07:09:20 +0000 (0:00:00.242) 0:00:18.748 ****** 2025-09-19 07:11:55.402323 | orchestrator | 2025-09-19 07:11:55.402341 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-19 07:11:55.402360 | orchestrator | Friday 19 September 2025 07:09:20 +0000 (0:00:00.064) 0:00:18.813 ****** 2025-09-19 07:11:55.402378 | orchestrator | 2025-09-19 07:11:55.402398 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-19 07:11:55.402416 | orchestrator | Friday 19 September 2025 07:09:21 +0000 (0:00:00.073) 0:00:18.887 ****** 2025-09-19 07:11:55.402433 | orchestrator | 2025-09-19 07:11:55.402453 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-09-19 07:11:55.402471 | orchestrator | Friday 19 September 2025 07:09:21 +0000 (0:00:00.167) 0:00:19.054 ****** 2025-09-19 07:11:55.402490 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:11:55.402510 | orchestrator | 
2025-09-19 07:11:55.402539 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-09-19 07:11:55.402558 | orchestrator | Friday 19 September 2025 07:09:21 +0000 (0:00:00.207) 0:00:19.262 ****** 2025-09-19 07:11:55.402576 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:11:55.402594 | orchestrator | 2025-09-19 07:11:55.402612 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-09-19 07:11:55.402631 | orchestrator | Friday 19 September 2025 07:09:21 +0000 (0:00:00.189) 0:00:19.452 ****** 2025-09-19 07:11:55.402649 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:11:55.402667 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:11:55.402684 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:11:55.402702 | orchestrator | 2025-09-19 07:11:55.402721 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-09-19 07:11:55.402739 | orchestrator | Friday 19 September 2025 07:10:27 +0000 (0:01:05.645) 0:01:25.097 ****** 2025-09-19 07:11:55.402758 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:11:55.402775 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:11:55.402793 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:11:55.402810 | orchestrator | 2025-09-19 07:11:55.402828 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-19 07:11:55.402846 | orchestrator | Friday 19 September 2025 07:11:44 +0000 (0:01:17.146) 0:02:42.244 ****** 2025-09-19 07:11:55.402863 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:11:55.402881 | orchestrator | 2025-09-19 07:11:55.402976 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-09-19 07:11:55.402995 | orchestrator | Friday 19 September 2025 07:11:44 +0000 
(0:00:00.566) 0:02:42.810 ****** 2025-09-19 07:11:55.403013 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:11:55.403033 | orchestrator | 2025-09-19 07:11:55.403053 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-09-19 07:11:55.403250 | orchestrator | Friday 19 September 2025 07:11:47 +0000 (0:00:02.427) 0:02:45.238 ****** 2025-09-19 07:11:55.403385 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:11:55.403403 | orchestrator | 2025-09-19 07:11:55.403418 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-09-19 07:11:55.403434 | orchestrator | Friday 19 September 2025 07:11:49 +0000 (0:00:02.313) 0:02:47.551 ****** 2025-09-19 07:11:55.403450 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:11:55.403465 | orchestrator | 2025-09-19 07:11:55.403483 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-09-19 07:11:55.403500 | orchestrator | Friday 19 September 2025 07:11:52 +0000 (0:00:02.697) 0:02:50.249 ****** 2025-09-19 07:11:55.403517 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:11:55.403533 | orchestrator | 2025-09-19 07:11:55.403567 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:11:55.403585 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 07:11:55.403622 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 07:11:55.403638 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 07:11:55.403654 | orchestrator | 2025-09-19 07:11:55.403670 | orchestrator | 2025-09-19 07:11:55.403685 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:11:55.403701 | orchestrator | Friday 19 
September 2025 07:11:54 +0000 (0:00:02.544) 0:02:52.794 ****** 2025-09-19 07:11:55.403716 | orchestrator | =============================================================================== 2025-09-19 07:11:55.403730 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 77.15s 2025-09-19 07:11:55.403745 | orchestrator | opensearch : Restart opensearch container ------------------------------ 65.65s 2025-09-19 07:11:55.403760 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.43s 2025-09-19 07:11:55.403775 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.70s 2025-09-19 07:11:55.403791 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.54s 2025-09-19 07:11:55.403808 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.49s 2025-09-19 07:11:55.403824 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.43s 2025-09-19 07:11:55.403840 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.40s 2025-09-19 07:11:55.403856 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.31s 2025-09-19 07:11:55.403873 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.15s 2025-09-19 07:11:55.403956 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.77s 2025-09-19 07:11:55.403969 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.57s 2025-09-19 07:11:55.403980 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.14s 2025-09-19 07:11:55.403989 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.07s 2025-09-19 07:11:55.403999 | orchestrator | opensearch : Setting 
sysctl values -------------------------------------- 0.65s 2025-09-19 07:11:55.404009 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2025-09-19 07:11:55.404019 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2025-09-19 07:11:55.404029 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.44s 2025-09-19 07:11:55.404048 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.36s 2025-09-19 07:11:55.404058 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.31s 2025-09-19 07:11:55.404068 | orchestrator | 2025-09-19 07:11:55 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:11:55.404078 | orchestrator | 2025-09-19 07:11:55 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED 2025-09-19 07:11:55.404088 | orchestrator | 2025-09-19 07:11:55 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:11:58.438566 | orchestrator | 2025-09-19 07:11:58 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:11:58.439763 | orchestrator | 2025-09-19 07:11:58 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED 2025-09-19 07:11:58.439803 | orchestrator | 2025-09-19 07:11:58 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:01.486874 | orchestrator | 2025-09-19 07:12:01 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:12:01.488339 | orchestrator | 2025-09-19 07:12:01 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED 2025-09-19 07:12:01.488411 | orchestrator | 2025-09-19 07:12:01 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:04.539630 | orchestrator | 2025-09-19 07:12:04 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:12:04.541684 | 
orchestrator | 2025-09-19 07:12:04 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED 2025-09-19 07:12:04.541708 | orchestrator | 2025-09-19 07:12:04 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:07.584347 | orchestrator | 2025-09-19 07:12:07 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state STARTED 2025-09-19 07:12:07.586251 | orchestrator | 2025-09-19 07:12:07 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED 2025-09-19 07:12:07.586449 | orchestrator | 2025-09-19 07:12:07 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:10.642082 | orchestrator | 2025-09-19 07:12:10.642183 | orchestrator | 2025-09-19 07:12:10.642199 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-09-19 07:12:10.642211 | orchestrator | 2025-09-19 07:12:10.642221 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-19 07:12:10.642232 | orchestrator | Friday 19 September 2025 07:09:02 +0000 (0:00:00.092) 0:00:00.092 ****** 2025-09-19 07:12:10.642242 | orchestrator | ok: [localhost] => { 2025-09-19 07:12:10.642254 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-09-19 07:12:10.642265 | orchestrator | } 2025-09-19 07:12:10.642275 | orchestrator | 2025-09-19 07:12:10.642286 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-09-19 07:12:10.642296 | orchestrator | Friday 19 September 2025 07:09:02 +0000 (0:00:00.047) 0:00:00.139 ****** 2025-09-19 07:12:10.642306 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-09-19 07:12:10.642318 | orchestrator | ...ignoring 2025-09-19 07:12:10.642348 | orchestrator | 2025-09-19 07:12:10.642359 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-09-19 07:12:10.642369 | orchestrator | Friday 19 September 2025 07:09:05 +0000 (0:00:02.734) 0:00:02.873 ****** 2025-09-19 07:12:10.642379 | orchestrator | skipping: [localhost] 2025-09-19 07:12:10.642389 | orchestrator | 2025-09-19 07:12:10.642399 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-09-19 07:12:10.642409 | orchestrator | Friday 19 September 2025 07:09:05 +0000 (0:00:00.054) 0:00:02.928 ****** 2025-09-19 07:12:10.642419 | orchestrator | ok: [localhost] 2025-09-19 07:12:10.642429 | orchestrator | 2025-09-19 07:12:10.642439 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:12:10.642449 | orchestrator | 2025-09-19 07:12:10.642459 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 07:12:10.642469 | orchestrator | Friday 19 September 2025 07:09:05 +0000 (0:00:00.173) 0:00:03.102 ****** 2025-09-19 07:12:10.642479 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:10.642488 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:10.642499 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:10.642509 | orchestrator | 2025-09-19 07:12:10.642519 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:12:10.642529 | orchestrator | Friday 19 September 2025 07:09:05 +0000 (0:00:00.346) 0:00:03.448 ****** 2025-09-19 07:12:10.642539 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-09-19 07:12:10.642549 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2025-09-19 07:12:10.642559 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-19 07:12:10.642569 | orchestrator | 2025-09-19 07:12:10.642581 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-19 07:12:10.642592 | orchestrator | 2025-09-19 07:12:10.642602 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-19 07:12:10.642635 | orchestrator | Friday 19 September 2025 07:09:06 +0000 (0:00:00.673) 0:00:04.122 ****** 2025-09-19 07:12:10.642646 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-19 07:12:10.642657 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-19 07:12:10.642669 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-19 07:12:10.642680 | orchestrator | 2025-09-19 07:12:10.642704 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-19 07:12:10.642716 | orchestrator | Friday 19 September 2025 07:09:06 +0000 (0:00:00.359) 0:00:04.482 ****** 2025-09-19 07:12:10.642727 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:12:10.642739 | orchestrator | 2025-09-19 07:12:10.642751 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-09-19 07:12:10.642762 | orchestrator | Friday 19 September 2025 07:09:07 +0000 (0:00:00.522) 0:00:05.004 ****** 2025-09-19 07:12:10.642799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 07:12:10.642817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 07:12:10.642842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 07:12:10.642855 | orchestrator | 2025-09-19 07:12:10.642909 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-09-19 07:12:10.642922 | orchestrator | Friday 19 September 2025 07:09:10 +0000 (0:00:03.288) 0:00:08.293 ****** 2025-09-19 07:12:10.642934 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:12:10.642945 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:10.642957 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:10.642967 | orchestrator | 2025-09-19 07:12:10.642977 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-09-19 07:12:10.642987 | orchestrator | Friday 19 September 2025 07:09:11 +0000 (0:00:00.789) 0:00:09.083 ****** 2025-09-19 07:12:10.642997 | orchestrator | 
skipping: [testbed-node-1] 2025-09-19 07:12:10.643007 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:10.643017 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:12:10.643027 | orchestrator | 2025-09-19 07:12:10.643037 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-09-19 07:12:10.643047 | orchestrator | Friday 19 September 2025 07:09:12 +0000 (0:00:01.501) 0:00:10.584 ****** 2025-09-19 07:12:10.643063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 07:12:10.643089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 07:12:10.643101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 07:12:10.643119 | orchestrator | 2025-09-19 07:12:10.643129 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-09-19 07:12:10.643139 | orchestrator | Friday 19 September 2025 07:09:16 +0000 (0:00:03.924) 0:00:14.509 ****** 2025-09-19 07:12:10.643154 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:10.643164 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:10.643174 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:12:10.643184 | orchestrator | 2025-09-19 07:12:10.643194 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-09-19 07:12:10.643204 | orchestrator | Friday 19 September 2025 07:09:17 +0000 (0:00:01.095) 0:00:15.604 ****** 2025-09-19 07:12:10.643214 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:12:10.643224 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:12:10.643234 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:12:10.643244 | orchestrator | 2025-09-19 07:12:10.643253 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-19 07:12:10.643263 | orchestrator | Friday 19 September 2025 07:09:21 +0000 (0:00:03.796) 0:00:19.401 ****** 2025-09-19 07:12:10.643273 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:12:10.643284 | orchestrator | 2025-09-19 07:12:10.643294 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-19 07:12:10.643304 | orchestrator | Friday 19 September 2025 07:09:22 +0000 (0:00:00.454) 0:00:19.855 ****** 2025-09-19 07:12:10.643323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:12:10.643341 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:10.643357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:12:10.643368 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:10.643386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:12:10.643404 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:10.643414 | orchestrator | 2025-09-19 07:12:10.643423 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-19 07:12:10.643433 | orchestrator | Friday 19 September 2025 
07:09:24 +0000 (0:00:02.938) 0:00:22.794 ****** 2025-09-19 07:12:10.643448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:12:10.643459 | orchestrator | skipping: [testbed-node-0] 2025-09-19 
07:12:10.643477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:12:10.643493 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:10.643504 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:12:10.643515 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:10.643525 | orchestrator | 2025-09-19 07:12:10.643539 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over backend internal TLS key] ***** 2025-09-19 07:12:10.643549 | orchestrator | Friday 19 September 2025 07:09:27 +0000 (0:00:02.643) 0:00:25.438 ****** 2025-09-19 07:12:10.643565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2025-09-19 07:12:10.643590 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:10.643601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:12:10.643612 
| orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:10.643627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:12:10.643644 | orchestrator | skipping: [testbed-node-0] 2025-09-19 
07:12:10.643654 | orchestrator | 2025-09-19 07:12:10.643664 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-09-19 07:12:10.643674 | orchestrator | Friday 19 September 2025 07:09:30 +0000 (0:00:03.368) 0:00:28.806 ****** 2025-09-19 07:12:10.643692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 07:12:10.643709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2025-09-19 07:12:10.643729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 07:12:10.643746 | orchestrator | 2025-09-19 07:12:10.643756 | orchestrator | TASK [mariadb : Create MariaDB 
volume] ***************************************** 2025-09-19 07:12:10.643766 | orchestrator | Friday 19 September 2025 07:09:34 +0000 (0:00:03.349) 0:00:32.156 ****** 2025-09-19 07:12:10.643776 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:12:10.643786 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:12:10.643796 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:12:10.643806 | orchestrator | 2025-09-19 07:12:10.643816 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-09-19 07:12:10.643826 | orchestrator | Friday 19 September 2025 07:09:35 +0000 (0:00:01.066) 0:00:33.222 ****** 2025-09-19 07:12:10.643835 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:10.643845 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:10.643855 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:10.643865 | orchestrator | 2025-09-19 07:12:10.643891 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-09-19 07:12:10.643902 | orchestrator | Friday 19 September 2025 07:09:35 +0000 (0:00:00.359) 0:00:33.582 ****** 2025-09-19 07:12:10.643912 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:10.643922 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:10.643931 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:10.643941 | orchestrator | 2025-09-19 07:12:10.643951 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-09-19 07:12:10.643966 | orchestrator | Friday 19 September 2025 07:09:36 +0000 (0:00:00.390) 0:00:33.972 ****** 2025-09-19 07:12:10.643977 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-09-19 07:12:10.643987 | orchestrator | ...ignoring 2025-09-19 07:12:10.643998 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-09-19 07:12:10.644008 | orchestrator | ...ignoring 2025-09-19 07:12:10.644018 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-09-19 07:12:10.644028 | orchestrator | ...ignoring 2025-09-19 07:12:10.644045 | orchestrator | 2025-09-19 07:12:10.644055 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-09-19 07:12:10.644065 | orchestrator | Friday 19 September 2025 07:09:47 +0000 (0:00:10.872) 0:00:44.845 ****** 2025-09-19 07:12:10.644075 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:10.644085 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:10.644095 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:10.644105 | orchestrator | 2025-09-19 07:12:10.644115 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-09-19 07:12:10.644125 | orchestrator | Friday 19 September 2025 07:09:47 +0000 (0:00:00.880) 0:00:45.726 ****** 2025-09-19 07:12:10.644134 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:10.644144 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:10.644154 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:10.644164 | orchestrator | 2025-09-19 07:12:10.644174 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-09-19 07:12:10.644184 | orchestrator | Friday 19 September 2025 07:09:48 +0000 (0:00:00.441) 0:00:46.167 ****** 2025-09-19 07:12:10.644194 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:10.644204 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:10.644214 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:10.644224 | orchestrator | 2025-09-19 07:12:10.644234 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-09-19 07:12:10.644244 | orchestrator | Friday 19 September 2025 07:09:48 +0000 (0:00:00.430) 0:00:46.598 ****** 2025-09-19 07:12:10.644254 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:10.644264 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:10.644273 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:10.644283 | orchestrator | 2025-09-19 07:12:10.644293 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-09-19 07:12:10.644309 | orchestrator | Friday 19 September 2025 07:09:49 +0000 (0:00:00.417) 0:00:47.016 ****** 2025-09-19 07:12:10.644319 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:10.644329 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:10.644339 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:10.644349 | orchestrator | 2025-09-19 07:12:10.644359 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-09-19 07:12:10.644369 | orchestrator | Friday 19 September 2025 07:09:49 +0000 (0:00:00.492) 0:00:47.508 ****** 2025-09-19 07:12:10.644379 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:10.644389 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:10.644398 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:10.644408 | orchestrator | 2025-09-19 07:12:10.644418 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-19 07:12:10.644428 | orchestrator | Friday 19 September 2025 07:09:50 +0000 (0:00:00.375) 0:00:47.884 ****** 2025-09-19 07:12:10.644438 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:10.644448 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:10.644458 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-09-19 07:12:10.644467 | orchestrator | 2025-09-19 
07:12:10.644477 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-09-19 07:12:10.644487 | orchestrator | Friday 19 September 2025 07:09:50 +0000 (0:00:00.353) 0:00:48.238 ****** 2025-09-19 07:12:10.644497 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:12:10.644507 | orchestrator | 2025-09-19 07:12:10.644517 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-09-19 07:12:10.644526 | orchestrator | Friday 19 September 2025 07:10:00 +0000 (0:00:10.201) 0:00:58.439 ****** 2025-09-19 07:12:10.644536 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:10.644546 | orchestrator | 2025-09-19 07:12:10.644556 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-19 07:12:10.644566 | orchestrator | Friday 19 September 2025 07:10:00 +0000 (0:00:00.097) 0:00:58.537 ****** 2025-09-19 07:12:10.644576 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:10.644592 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:10.644602 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:10.644612 | orchestrator | 2025-09-19 07:12:10.644622 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-09-19 07:12:10.644632 | orchestrator | Friday 19 September 2025 07:10:01 +0000 (0:00:00.822) 0:00:59.359 ****** 2025-09-19 07:12:10.644641 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:12:10.644651 | orchestrator | 2025-09-19 07:12:10.644661 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-09-19 07:12:10.644671 | orchestrator | Friday 19 September 2025 07:10:08 +0000 (0:00:06.932) 0:01:06.292 ****** 2025-09-19 07:12:10.644681 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:10.644691 | orchestrator | 2025-09-19 07:12:10.644701 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2025-09-19 07:12:10.644711 | orchestrator | Friday 19 September 2025 07:10:10 +0000 (0:00:02.519) 0:01:08.811 ****** 2025-09-19 07:12:10.644721 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:10.644731 | orchestrator | 2025-09-19 07:12:10.644741 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-09-19 07:12:10.644750 | orchestrator | Friday 19 September 2025 07:10:13 +0000 (0:00:02.557) 0:01:11.368 ****** 2025-09-19 07:12:10.644760 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:12:10.644770 | orchestrator | 2025-09-19 07:12:10.644784 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-09-19 07:12:10.644795 | orchestrator | Friday 19 September 2025 07:10:13 +0000 (0:00:00.126) 0:01:11.495 ****** 2025-09-19 07:12:10.644805 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:10.644814 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:10.644824 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:10.644834 | orchestrator | 2025-09-19 07:12:10.644844 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-09-19 07:12:10.644854 | orchestrator | Friday 19 September 2025 07:10:14 +0000 (0:00:00.501) 0:01:11.997 ****** 2025-09-19 07:12:10.644864 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:10.644923 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-19 07:12:10.644935 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:12:10.644945 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:12:10.644955 | orchestrator | 2025-09-19 07:12:10.644965 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-19 07:12:10.644975 | orchestrator | skipping: no hosts matched 2025-09-19 07:12:10.644984 | orchestrator | 2025-09-19 07:12:10.644994 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-19 07:12:10.645004 | orchestrator | 2025-09-19 07:12:10.645014 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-19 07:12:10.645024 | orchestrator | Friday 19 September 2025 07:10:14 +0000 (0:00:00.367) 0:01:12.364 ****** 2025-09-19 07:12:10.645034 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:12:10.645044 | orchestrator | 2025-09-19 07:12:10.645054 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-19 07:12:10.645064 | orchestrator | Friday 19 September 2025 07:10:33 +0000 (0:00:18.600) 0:01:30.965 ****** 2025-09-19 07:12:10.645073 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:10.645083 | orchestrator | 2025-09-19 07:12:10.645093 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-19 07:12:10.645103 | orchestrator | Friday 19 September 2025 07:10:54 +0000 (0:00:21.578) 0:01:52.543 ****** 2025-09-19 07:12:10.645113 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:10.645123 | orchestrator | 2025-09-19 07:12:10.645133 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-19 07:12:10.645143 | orchestrator | 2025-09-19 07:12:10.645153 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-19 07:12:10.645163 | orchestrator | Friday 19 September 2025 07:10:57 +0000 (0:00:02.488) 0:01:55.031 ****** 2025-09-19 07:12:10.645179 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:12:10.645189 | orchestrator | 2025-09-19 07:12:10.645199 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-19 07:12:10.645216 | orchestrator | Friday 19 September 2025 07:11:16 +0000 (0:00:18.833) 0:02:13.865 ****** 2025-09-19 07:12:10.645226 | 
orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:10.645236 | orchestrator | 2025-09-19 07:12:10.645246 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-19 07:12:10.645256 | orchestrator | Friday 19 September 2025 07:11:36 +0000 (0:00:20.623) 0:02:34.489 ****** 2025-09-19 07:12:10.645266 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:10.645276 | orchestrator | 2025-09-19 07:12:10.645285 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-19 07:12:10.645295 | orchestrator | 2025-09-19 07:12:10.645305 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-19 07:12:10.645315 | orchestrator | Friday 19 September 2025 07:11:39 +0000 (0:00:02.685) 0:02:37.174 ****** 2025-09-19 07:12:10.645325 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:12:10.645335 | orchestrator | 2025-09-19 07:12:10.645345 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-19 07:12:10.645355 | orchestrator | Friday 19 September 2025 07:11:49 +0000 (0:00:10.641) 0:02:47.816 ****** 2025-09-19 07:12:10.645365 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:10.645375 | orchestrator | 2025-09-19 07:12:10.645384 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-19 07:12:10.645394 | orchestrator | Friday 19 September 2025 07:11:55 +0000 (0:00:05.566) 0:02:53.383 ****** 2025-09-19 07:12:10.645404 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:10.645414 | orchestrator | 2025-09-19 07:12:10.645424 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-19 07:12:10.645434 | orchestrator | 2025-09-19 07:12:10.645444 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-19 07:12:10.645454 | orchestrator | 
Friday 19 September 2025 07:11:57 +0000 (0:00:02.241) 0:02:55.624 ****** 2025-09-19 07:12:10.645464 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:12:10.645473 | orchestrator | 2025-09-19 07:12:10.645483 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-09-19 07:12:10.645493 | orchestrator | Friday 19 September 2025 07:11:58 +0000 (0:00:00.467) 0:02:56.092 ****** 2025-09-19 07:12:10.645503 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:10.645513 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:10.645523 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:12:10.645533 | orchestrator | 2025-09-19 07:12:10.645543 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-09-19 07:12:10.645552 | orchestrator | Friday 19 September 2025 07:12:00 +0000 (0:00:02.375) 0:02:58.467 ****** 2025-09-19 07:12:10.645562 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:10.645572 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:10.645582 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:12:10.645592 | orchestrator | 2025-09-19 07:12:10.645602 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-09-19 07:12:10.645612 | orchestrator | Friday 19 September 2025 07:12:02 +0000 (0:00:02.135) 0:03:00.603 ****** 2025-09-19 07:12:10.645622 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:10.645631 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:10.645641 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:12:10.645651 | orchestrator | 2025-09-19 07:12:10.645661 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-09-19 07:12:10.645671 | orchestrator | Friday 19 September 2025 07:12:04 +0000 (0:00:02.152) 0:03:02.755 ****** 2025-09-19 07:12:10.645681 | 
orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:10.645696 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:10.645706 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:12:10.645722 | orchestrator |
2025-09-19 07:12:10.645732 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2025-09-19 07:12:10.645742 | orchestrator | Friday 19 September 2025 07:12:07 +0000 (0:00:02.333) 0:03:05.089 ******
2025-09-19 07:12:10.645752 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:10.645762 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:10.645772 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:10.645782 | orchestrator |
2025-09-19 07:12:10.645792 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-09-19 07:12:10.645802 | orchestrator | Friday 19 September 2025 07:12:10 +0000 (0:00:02.746) 0:03:07.835 ******
2025-09-19 07:12:10.645812 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:10.645822 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:10.645832 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:10.645841 | orchestrator |
2025-09-19 07:12:10.645851 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:12:10.645861 | orchestrator | localhost      : ok=3   changed=0   unreachable=0 failed=0 skipped=1   rescued=0 ignored=1
2025-09-19 07:12:10.645914 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2025-09-19 07:12:10.645928 | orchestrator | testbed-node-1 : ok=20  changed=7   unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2025-09-19 07:12:10.645938 | orchestrator | testbed-node-2 : ok=20  changed=7   unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2025-09-19 07:12:10.645948 | orchestrator |
2025-09-19 07:12:10.645958 | orchestrator |
2025-09-19 07:12:10.645968 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:12:10.645978 | orchestrator | Friday 19 September 2025 07:12:10 +0000 (0:00:00.205) 0:03:08.041 ******
2025-09-19 07:12:10.645988 | orchestrator | ===============================================================================
2025-09-19 07:12:10.645998 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 42.20s
2025-09-19 07:12:10.646008 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 37.43s
2025-09-19 07:12:10.646064 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.87s
2025-09-19 07:12:10.646078 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 10.64s
2025-09-19 07:12:10.646088 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.20s
2025-09-19 07:12:10.646098 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 6.93s
2025-09-19 07:12:10.646108 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.57s
2025-09-19 07:12:10.646118 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.17s
2025-09-19 07:12:10.646128 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.92s
2025-09-19 07:12:10.646137 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.80s
2025-09-19 07:12:10.646147 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.37s
2025-09-19 07:12:10.646157 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.35s
2025-09-19 07:12:10.646167 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.29s
2025-09-19 07:12:10.646177 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.94s
2025-09-19 07:12:10.646187 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.75s
2025-09-19 07:12:10.646197 | orchestrator | Check MariaDB service --------------------------------------------------- 2.73s
2025-09-19 07:12:10.646207 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.64s
2025-09-19 07:12:10.646217 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.56s
2025-09-19 07:12:10.646235 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 2.52s
2025-09-19 07:12:10.646245 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.38s
2025-09-19 07:12:10.646255 | orchestrator | 2025-09-19 07:12:10 | INFO  | Task 9da61f18-6117-471c-b4c0-15a4a195411c is in state SUCCESS
2025-09-19 07:12:10.646265 | orchestrator | 2025-09-19 07:12:10 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED
2025-09-19 07:12:10.646276 | orchestrator | 2025-09-19 07:12:10 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:12:13.691457 | orchestrator | 2025-09-19 07:12:13 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED
2025-09-19 07:12:13.692967 | orchestrator | 2025-09-19 07:12:13 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED
2025-09-19 07:12:13.695465 | orchestrator | 2025-09-19 07:12:13 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED
2025-09-19 07:12:13.695711 | orchestrator | 2025-09-19 07:12:13 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:12:16.741009 | orchestrator | 2025-09-19 07:12:16 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED
2025-09-19 07:12:16.744637 | orchestrator | 2025-09-19 07:12:16 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED
2025-09-19
07:12:16.746940 | orchestrator | 2025-09-19 07:12:16 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:12:16.746974 | orchestrator | 2025-09-19 07:12:16 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:19.791350 | orchestrator | 2025-09-19 07:12:19 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:12:19.791978 | orchestrator | 2025-09-19 07:12:19 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED 2025-09-19 07:12:19.793753 | orchestrator | 2025-09-19 07:12:19 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:12:19.794164 | orchestrator | 2025-09-19 07:12:19 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:22.831808 | orchestrator | 2025-09-19 07:12:22 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:12:22.832273 | orchestrator | 2025-09-19 07:12:22 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED 2025-09-19 07:12:22.833588 | orchestrator | 2025-09-19 07:12:22 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:12:22.833616 | orchestrator | 2025-09-19 07:12:22 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:25.876352 | orchestrator | 2025-09-19 07:12:25 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:12:25.877533 | orchestrator | 2025-09-19 07:12:25 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED 2025-09-19 07:12:25.879557 | orchestrator | 2025-09-19 07:12:25 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:12:25.880045 | orchestrator | 2025-09-19 07:12:25 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:28.923200 | orchestrator | 2025-09-19 07:12:28 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:12:28.924184 | orchestrator | 2025-09-19 07:12:28 | 
INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED 2025-09-19 07:12:28.924996 | orchestrator | 2025-09-19 07:12:28 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:12:28.925626 | orchestrator | 2025-09-19 07:12:28 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:31.957326 | orchestrator | 2025-09-19 07:12:31 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:12:31.959771 | orchestrator | 2025-09-19 07:12:31 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED 2025-09-19 07:12:31.961134 | orchestrator | 2025-09-19 07:12:31 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:12:31.961161 | orchestrator | 2025-09-19 07:12:31 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:34.996034 | orchestrator | 2025-09-19 07:12:34 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:12:34.996142 | orchestrator | 2025-09-19 07:12:34 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED 2025-09-19 07:12:34.996607 | orchestrator | 2025-09-19 07:12:34 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:12:34.996636 | orchestrator | 2025-09-19 07:12:34 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:38.035294 | orchestrator | 2025-09-19 07:12:38 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:12:38.035387 | orchestrator | 2025-09-19 07:12:38 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED 2025-09-19 07:12:38.035401 | orchestrator | 2025-09-19 07:12:38 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:12:38.035414 | orchestrator | 2025-09-19 07:12:38 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:41.068602 | orchestrator | 2025-09-19 07:12:41 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in 
state STARTED 2025-09-19 07:12:41.069474 | orchestrator | 2025-09-19 07:12:41 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED 2025-09-19 07:12:41.070823 | orchestrator | 2025-09-19 07:12:41 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:12:41.070975 | orchestrator | 2025-09-19 07:12:41 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:44.108073 | orchestrator | 2025-09-19 07:12:44 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:12:44.109518 | orchestrator | 2025-09-19 07:12:44 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED 2025-09-19 07:12:44.111575 | orchestrator | 2025-09-19 07:12:44 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:12:44.112169 | orchestrator | 2025-09-19 07:12:44 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:47.156241 | orchestrator | 2025-09-19 07:12:47 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:12:47.157429 | orchestrator | 2025-09-19 07:12:47 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED 2025-09-19 07:12:47.159687 | orchestrator | 2025-09-19 07:12:47 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:12:47.159777 | orchestrator | 2025-09-19 07:12:47 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:50.212711 | orchestrator | 2025-09-19 07:12:50 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:12:50.215956 | orchestrator | 2025-09-19 07:12:50 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED 2025-09-19 07:12:50.219798 | orchestrator | 2025-09-19 07:12:50 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:12:50.219873 | orchestrator | 2025-09-19 07:12:50 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:53.264360 | orchestrator 
| 2025-09-19 07:12:53 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:12:53.264987 | orchestrator | 2025-09-19 07:12:53 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED 2025-09-19 07:12:53.265638 | orchestrator | 2025-09-19 07:12:53 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:12:53.265662 | orchestrator | 2025-09-19 07:12:53 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:56.298758 | orchestrator | 2025-09-19 07:12:56 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:12:56.298911 | orchestrator | 2025-09-19 07:12:56 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state STARTED 2025-09-19 07:12:56.298929 | orchestrator | 2025-09-19 07:12:56 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:12:56.298941 | orchestrator | 2025-09-19 07:12:56 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:59.345094 | orchestrator | 2025-09-19 07:12:59 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:12:59.348921 | orchestrator | 2025-09-19 07:12:59 | INFO  | Task 9880bfe3-6176-46ea-bc08-55d07e3c6826 is in state SUCCESS 2025-09-19 07:12:59.351176 | orchestrator | 2025-09-19 07:12:59.351309 | orchestrator | 2025-09-19 07:12:59.351326 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-09-19 07:12:59.351339 | orchestrator | 2025-09-19 07:12:59.351350 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-19 07:12:59.351362 | orchestrator | Friday 19 September 2025 07:10:50 +0000 (0:00:00.624) 0:00:00.624 ****** 2025-09-19 07:12:59.351453 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:12:59.351467 | orchestrator | 2025-09-19 07:12:59.351479 | 
orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-19 07:12:59.351490 | orchestrator | Friday 19 September 2025 07:10:51 +0000 (0:00:00.653) 0:00:01.277 ****** 2025-09-19 07:12:59.351502 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:59.351566 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:59.352145 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:59.352166 | orchestrator | 2025-09-19 07:12:59.352178 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-19 07:12:59.352190 | orchestrator | Friday 19 September 2025 07:10:51 +0000 (0:00:00.599) 0:00:01.876 ****** 2025-09-19 07:12:59.352201 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:59.352213 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:59.352224 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:59.352236 | orchestrator | 2025-09-19 07:12:59.352247 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-19 07:12:59.352259 | orchestrator | Friday 19 September 2025 07:10:51 +0000 (0:00:00.276) 0:00:02.153 ****** 2025-09-19 07:12:59.352270 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:59.352281 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:59.352293 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:59.352304 | orchestrator | 2025-09-19 07:12:59.352315 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-19 07:12:59.352327 | orchestrator | Friday 19 September 2025 07:10:52 +0000 (0:00:00.767) 0:00:02.920 ****** 2025-09-19 07:12:59.352338 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:59.352350 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:59.352361 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:59.352372 | orchestrator | 2025-09-19 07:12:59.352384 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] 
****************************************** 2025-09-19 07:12:59.352396 | orchestrator | Friday 19 September 2025 07:10:53 +0000 (0:00:00.296) 0:00:03.217 ****** 2025-09-19 07:12:59.352407 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:59.352419 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:59.352457 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:59.352488 | orchestrator | 2025-09-19 07:12:59.352514 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-19 07:12:59.352526 | orchestrator | Friday 19 September 2025 07:10:53 +0000 (0:00:00.300) 0:00:03.517 ****** 2025-09-19 07:12:59.352537 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:59.352548 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:59.352560 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:59.352571 | orchestrator | 2025-09-19 07:12:59.352582 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-19 07:12:59.352594 | orchestrator | Friday 19 September 2025 07:10:53 +0000 (0:00:00.304) 0:00:03.821 ****** 2025-09-19 07:12:59.352605 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:59.352617 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:59.352628 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:59.352640 | orchestrator | 2025-09-19 07:12:59.352651 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-19 07:12:59.352662 | orchestrator | Friday 19 September 2025 07:10:54 +0000 (0:00:00.493) 0:00:04.315 ****** 2025-09-19 07:12:59.352673 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:59.352685 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:59.352696 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:59.352707 | orchestrator | 2025-09-19 07:12:59.352718 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-19 
07:12:59.352729 | orchestrator | Friday 19 September 2025 07:10:54 +0000 (0:00:00.295) 0:00:04.611 ****** 2025-09-19 07:12:59.352741 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-19 07:12:59.352754 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 07:12:59.352766 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 07:12:59.352778 | orchestrator | 2025-09-19 07:12:59.352791 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-19 07:12:59.352804 | orchestrator | Friday 19 September 2025 07:10:55 +0000 (0:00:00.629) 0:00:05.241 ****** 2025-09-19 07:12:59.352818 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:59.352855 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:59.352868 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:59.352881 | orchestrator | 2025-09-19 07:12:59.352894 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-19 07:12:59.352906 | orchestrator | Friday 19 September 2025 07:10:55 +0000 (0:00:00.448) 0:00:05.690 ****** 2025-09-19 07:12:59.352918 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-19 07:12:59.352931 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 07:12:59.352944 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 07:12:59.352957 | orchestrator | 2025-09-19 07:12:59.352969 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-19 07:12:59.352982 | orchestrator | Friday 19 September 2025 07:10:57 +0000 (0:00:02.219) 0:00:07.909 ****** 2025-09-19 07:12:59.352995 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  
2025-09-19 07:12:59.353009 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-19 07:12:59.353022 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-19 07:12:59.353035 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:59.353047 | orchestrator | 2025-09-19 07:12:59.353060 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-19 07:12:59.353119 | orchestrator | Friday 19 September 2025 07:10:58 +0000 (0:00:00.407) 0:00:08.317 ****** 2025-09-19 07:12:59.353135 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.353160 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.353172 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.353183 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:59.353195 | orchestrator | 2025-09-19 07:12:59.353206 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-19 07:12:59.353218 | orchestrator | Friday 19 September 2025 07:10:58 +0000 (0:00:00.802) 0:00:09.120 ****** 2025-09-19 07:12:59.353231 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.353252 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.353264 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.353302 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:59.353315 | orchestrator | 2025-09-19 07:12:59.353326 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-19 07:12:59.353337 | orchestrator | Friday 19 September 2025 07:10:59 +0000 (0:00:00.152) 0:00:09.272 ****** 2025-09-19 07:12:59.353351 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd75843f980d5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-19 07:10:56.182963', 'end': '2025-09-19 07:10:56.240325', 'delta': '0:00:00.057362', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 
'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d75843f980d5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-09-19 07:12:59.353366 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '97d739be75d5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-19 07:10:56.988229', 'end': '2025-09-19 07:10:57.034846', 'delta': '0:00:00.046617', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['97d739be75d5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-09-19 07:12:59.353420 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '54a8d1bbae12', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-19 07:10:57.532360', 'end': '2025-09-19 07:10:57.575070', 'delta': '0:00:00.042710', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['54a8d1bbae12'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-09-19 07:12:59.353435 | orchestrator | 2025-09-19 07:12:59.353446 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] 
******************************* 2025-09-19 07:12:59.353457 | orchestrator | Friday 19 September 2025 07:10:59 +0000 (0:00:00.357) 0:00:09.629 ****** 2025-09-19 07:12:59.353468 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:59.353480 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:59.353491 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:59.353502 | orchestrator | 2025-09-19 07:12:59.353513 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-19 07:12:59.353525 | orchestrator | Friday 19 September 2025 07:10:59 +0000 (0:00:00.459) 0:00:10.089 ****** 2025-09-19 07:12:59.353536 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-09-19 07:12:59.353547 | orchestrator | 2025-09-19 07:12:59.353558 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-19 07:12:59.353569 | orchestrator | Friday 19 September 2025 07:11:01 +0000 (0:00:01.707) 0:00:11.797 ****** 2025-09-19 07:12:59.353580 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:59.353591 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:59.353603 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:59.353614 | orchestrator | 2025-09-19 07:12:59.353625 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-19 07:12:59.353636 | orchestrator | Friday 19 September 2025 07:11:01 +0000 (0:00:00.264) 0:00:12.061 ****** 2025-09-19 07:12:59.353647 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:59.353658 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:59.353669 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:59.353681 | orchestrator | 2025-09-19 07:12:59.353698 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-19 07:12:59.353709 | orchestrator | Friday 19 September 2025 07:11:02 +0000 (0:00:00.401) 
0:00:12.462 ****** 2025-09-19 07:12:59.353721 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:59.353732 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:59.353743 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:59.353754 | orchestrator | 2025-09-19 07:12:59.353765 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-19 07:12:59.353776 | orchestrator | Friday 19 September 2025 07:11:02 +0000 (0:00:00.385) 0:00:12.848 ****** 2025-09-19 07:12:59.353788 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:59.353799 | orchestrator | 2025-09-19 07:12:59.353810 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-19 07:12:59.353821 | orchestrator | Friday 19 September 2025 07:11:02 +0000 (0:00:00.117) 0:00:12.965 ****** 2025-09-19 07:12:59.353970 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:59.353986 | orchestrator | 2025-09-19 07:12:59.353998 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-19 07:12:59.354008 | orchestrator | Friday 19 September 2025 07:11:02 +0000 (0:00:00.207) 0:00:13.173 ****** 2025-09-19 07:12:59.354076 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:59.354088 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:59.354098 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:59.354108 | orchestrator | 2025-09-19 07:12:59.354118 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-19 07:12:59.354138 | orchestrator | Friday 19 September 2025 07:11:03 +0000 (0:00:00.281) 0:00:13.455 ****** 2025-09-19 07:12:59.354148 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:59.354157 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:59.354167 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:59.354177 | orchestrator | 2025-09-19 
07:12:59.354188 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-19 07:12:59.354198 | orchestrator | Friday 19 September 2025 07:11:03 +0000 (0:00:00.303) 0:00:13.758 ****** 2025-09-19 07:12:59.354208 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:59.354218 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:59.354228 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:59.354238 | orchestrator | 2025-09-19 07:12:59.354248 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-19 07:12:59.354258 | orchestrator | Friday 19 September 2025 07:11:03 +0000 (0:00:00.401) 0:00:14.160 ****** 2025-09-19 07:12:59.354268 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:59.354278 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:59.354288 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:59.354298 | orchestrator | 2025-09-19 07:12:59.354308 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-19 07:12:59.354318 | orchestrator | Friday 19 September 2025 07:11:04 +0000 (0:00:00.260) 0:00:14.421 ****** 2025-09-19 07:12:59.354399 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:59.354409 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:59.354420 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:59.354430 | orchestrator | 2025-09-19 07:12:59.354439 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-19 07:12:59.354450 | orchestrator | Friday 19 September 2025 07:11:04 +0000 (0:00:00.302) 0:00:14.724 ****** 2025-09-19 07:12:59.354460 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:59.354470 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:59.354480 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:59.354490 | orchestrator | 2025-09-19 
07:12:59.354500 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-19 07:12:59.354555 | orchestrator | Friday 19 September 2025 07:11:04 +0000 (0:00:00.283) 0:00:15.007 ****** 2025-09-19 07:12:59.354568 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:59.354578 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:59.354588 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:59.354598 | orchestrator | 2025-09-19 07:12:59.354608 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-19 07:12:59.354618 | orchestrator | Friday 19 September 2025 07:11:05 +0000 (0:00:00.403) 0:00:15.411 ****** 2025-09-19 07:12:59.354630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--787edb9c--1668--5795--8146--b6ac8c49142c-osd--block--787edb9c--1668--5795--8146--b6ac8c49142c', 'dm-uuid-LVM-df8XvXdoHIGkJefp0HH7ZFWONVQKENIEH8wfeuA4imBqhnBxb1pYjK5IgKNUowlj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.354642 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--af475f18--71a6--5278--b018--36a08189cb1c-osd--block--af475f18--71a6--5278--b018--36a08189cb1c', 'dm-uuid-LVM-4pb1QPgTa7PYbQ2Pi1TxExoVZ2rv7oE0fQxtBLHrJrDVqmOhdo6Bx4lKLzXwEcrF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 
'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.354667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.354678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.354688 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.354699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.354709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.354747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.354760 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.354770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.354789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a', 'scsi-SQEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part1', 'scsi-SQEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part14', 'scsi-SQEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part15', 'scsi-SQEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part16', 'scsi-SQEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:59.354808 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5631a8c0--2403--5b6d--b4ab--3f734fe52f75-osd--block--5631a8c0--2403--5b6d--b4ab--3f734fe52f75', 'dm-uuid-LVM-8FGxhz9XQMPcCWZM3pRrQdYdN4aupjGl8dI6hjzypij1bYPApneewuh1kDUkpKry'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.354865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--787edb9c--1668--5795--8146--b6ac8c49142c-osd--block--787edb9c--1668--5795--8146--b6ac8c49142c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WGeVUS-N1Mf-BB3U-v4Ty-F8zL-2ouv-RgTscQ', 'scsi-0QEMU_QEMU_HARDDISK_a2591162-fd7d-4f7c-a24f-a875e0bfaf5c', 'scsi-SQEMU_QEMU_HARDDISK_a2591162-fd7d-4f7c-a24f-a875e0bfaf5c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:59.354880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--32fceb46--e08d--5445--84d6--a85b98e59ab0-osd--block--32fceb46--e08d--5445--84d6--a85b98e59ab0', 'dm-uuid-LVM-587HvxXipBJ4T3nrPgDJLDlXup2mDr2wuf3F1Fe4cf0wd8hu1mNB4rKs7oD1MKGi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': 
'512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.354891 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--af475f18--71a6--5278--b018--36a08189cb1c-osd--block--af475f18--71a6--5278--b018--36a08189cb1c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6sv3aY-kbty-dkce-zN13-8qIJ-2Sck-zjAAQo', 'scsi-0QEMU_QEMU_HARDDISK_1117915d-c4ec-4d47-9877-c3f2a311bdd8', 'scsi-SQEMU_QEMU_HARDDISK_1117915d-c4ec-4d47-9877-c3f2a311bdd8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:59.354913 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af8571bd-f20f-46c1-9b84-53d29d179301', 'scsi-SQEMU_QEMU_HARDDISK_af8571bd-f20f-46c1-9b84-53d29d179301'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:59.354924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.354936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:59.354946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-09-19 07:12:59.354956 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:59.354995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.355008 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.355018 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.355042 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.355060 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.355073 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.355085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2af2e838--b751--5a2f--ab09--cbc0dc745073-osd--block--2af2e838--b751--5a2f--ab09--cbc0dc745073', 'dm-uuid-LVM-stnS00GaKqmnkIfk0RfxskLg1ZJTWmtFpfznfUsoNpRCwb8nwwfI6Oqo6xQHFpUa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.355106 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd', 'scsi-SQEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part1', 'scsi-SQEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part14', 'scsi-SQEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part15', 'scsi-SQEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part16', 'scsi-SQEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:59.355129 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5631a8c0--2403--5b6d--b4ab--3f734fe52f75-osd--block--5631a8c0--2403--5b6d--b4ab--3f734fe52f75'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ftiMBK-3syo-qzxd-buQ2-NTAu-qnjQ-3YjiVV', 'scsi-0QEMU_QEMU_HARDDISK_9b35f7c3-f4ee-4f20-a638-8acbecbf2b97', 'scsi-SQEMU_QEMU_HARDDISK_9b35f7c3-f4ee-4f20-a638-8acbecbf2b97'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:59.355145 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--03228564--3151--5027--920d--737061be0eca-osd--block--03228564--3151--5027--920d--737061be0eca', 'dm-uuid-LVM-eI6w1uc0XkNtnqpOQjt0bpJDUwBAvRDMkQ65lj4tyaEBdNJzRpKBEpWbpQ4ys0Zz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.355158 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.355170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': 
{'holders': ['ceph--32fceb46--e08d--5445--84d6--a85b98e59ab0-osd--block--32fceb46--e08d--5445--84d6--a85b98e59ab0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qdrrtu-Epqe-kEGe-GCqz-8pei-2gK0-ll8Cgo', 'scsi-0QEMU_QEMU_HARDDISK_0ec87ec4-de78-4354-a913-8c3da733e508', 'scsi-SQEMU_QEMU_HARDDISK_0ec87ec4-de78-4354-a913-8c3da733e508'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:59.355183 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.355203 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f326ea53-fd8a-4d1e-8637-ed74e9f7229b', 'scsi-SQEMU_QEMU_HARDDISK_f326ea53-fd8a-4d1e-8637-ed74e9f7229b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:59.355216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.355235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:59.355247 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:59.355262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.355275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.355286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.355298 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.355310 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:59.355331 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 
'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d', 'scsi-SQEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part1', 'scsi-SQEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part14', 'scsi-SQEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part15', 'scsi-SQEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part16', 'scsi-SQEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': 
'4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:59.355357 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2af2e838--b751--5a2f--ab09--cbc0dc745073-osd--block--2af2e838--b751--5a2f--ab09--cbc0dc745073'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Yyfqwl-HK9C-vUWq-ezQ3-J1x4-v9wL-Z7Zvjt', 'scsi-0QEMU_QEMU_HARDDISK_1f9d1cec-7d6c-4c71-8749-cd7e53c954b2', 'scsi-SQEMU_QEMU_HARDDISK_1f9d1cec-7d6c-4c71-8749-cd7e53c954b2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:59.355371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--03228564--3151--5027--920d--737061be0eca-osd--block--03228564--3151--5027--920d--737061be0eca'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4gNePi-p6bZ-PnsU-Kexi-wYB8-ohCZ-z8YGsJ', 'scsi-0QEMU_QEMU_HARDDISK_68d7532d-29ea-4f3d-b7b6-675f70301c39', 'scsi-SQEMU_QEMU_HARDDISK_68d7532d-29ea-4f3d-b7b6-675f70301c39'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:59.355384 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c8e79e65-71f7-4ae8-8fa4-6c07ef757528', 'scsi-SQEMU_QEMU_HARDDISK_c8e79e65-71f7-4ae8-8fa4-6c07ef757528'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:59.355400 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:59.355411 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:59.355426 | orchestrator | 2025-09-19 07:12:59.355437 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-19 07:12:59.355447 | orchestrator | Friday 19 September 2025 07:11:05 +0000 (0:00:00.471) 0:00:15.882 ****** 2025-09-19 07:12:59.355458 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--787edb9c--1668--5795--8146--b6ac8c49142c-osd--block--787edb9c--1668--5795--8146--b6ac8c49142c', 'dm-uuid-LVM-df8XvXdoHIGkJefp0HH7ZFWONVQKENIEH8wfeuA4imBqhnBxb1pYjK5IgKNUowlj'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355473 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--af475f18--71a6--5278--b018--36a08189cb1c-osd--block--af475f18--71a6--5278--b018--36a08189cb1c', 'dm-uuid-LVM-4pb1QPgTa7PYbQ2Pi1TxExoVZ2rv7oE0fQxtBLHrJrDVqmOhdo6Bx4lKLzXwEcrF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355484 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355495 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355506 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355523 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355539 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355550 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355565 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355575 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355594 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a', 'scsi-SQEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part1', 'scsi-SQEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part14', 'scsi-SQEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part15', 'scsi-SQEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part16', 'scsi-SQEMU_QEMU_HARDDISK_17cc47dd-17d3-4cf6-ba1c-5f9dd041cc1a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355617 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5631a8c0--2403--5b6d--b4ab--3f734fe52f75-osd--block--5631a8c0--2403--5b6d--b4ab--3f734fe52f75', 'dm-uuid-LVM-8FGxhz9XQMPcCWZM3pRrQdYdN4aupjGl8dI6hjzypij1bYPApneewuh1kDUkpKry'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355628 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--787edb9c--1668--5795--8146--b6ac8c49142c-osd--block--787edb9c--1668--5795--8146--b6ac8c49142c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WGeVUS-N1Mf-BB3U-v4Ty-F8zL-2ouv-RgTscQ', 'scsi-0QEMU_QEMU_HARDDISK_a2591162-fd7d-4f7c-a24f-a875e0bfaf5c', 'scsi-SQEMU_QEMU_HARDDISK_a2591162-fd7d-4f7c-a24f-a875e0bfaf5c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355639 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--32fceb46--e08d--5445--84d6--a85b98e59ab0-osd--block--32fceb46--e08d--5445--84d6--a85b98e59ab0', 'dm-uuid-LVM-587HvxXipBJ4T3nrPgDJLDlXup2mDr2wuf3F1Fe4cf0wd8hu1mNB4rKs7oD1MKGi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355656 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--af475f18--71a6--5278--b018--36a08189cb1c-osd--block--af475f18--71a6--5278--b018--36a08189cb1c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6sv3aY-kbty-dkce-zN13-8qIJ-2Sck-zjAAQo', 'scsi-0QEMU_QEMU_HARDDISK_1117915d-c4ec-4d47-9877-c3f2a311bdd8', 'scsi-SQEMU_QEMU_HARDDISK_1117915d-c4ec-4d47-9877-c3f2a311bdd8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355673 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355690 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af8571bd-f20f-46c1-9b84-53d29d179301', 'scsi-SQEMU_QEMU_HARDDISK_af8571bd-f20f-46c1-9b84-53d29d179301'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355701 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355711 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355722 | orchestrator | skipping: 
[testbed-node-3] 2025-09-19 07:12:59.355732 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355753 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355765 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355775 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355786 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355796 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355806 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2af2e838--b751--5a2f--ab09--cbc0dc745073-osd--block--2af2e838--b751--5a2f--ab09--cbc0dc745073', 'dm-uuid-LVM-stnS00GaKqmnkIfk0RfxskLg1ZJTWmtFpfznfUsoNpRCwb8nwwfI6Oqo6xQHFpUa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355922 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd', 'scsi-SQEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part1', 'scsi-SQEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part14', 'scsi-SQEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part15', 'scsi-SQEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part16', 'scsi-SQEMU_QEMU_HARDDISK_19b572c9-891c-4fc6-a34f-184d2479a4fd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-19 07:12:59.355939 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--03228564--3151--5027--920d--737061be0eca-osd--block--03228564--3151--5027--920d--737061be0eca', 'dm-uuid-LVM-eI6w1uc0XkNtnqpOQjt0bpJDUwBAvRDMkQ65lj4tyaEBdNJzRpKBEpWbpQ4ys0Zz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355950 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--5631a8c0--2403--5b6d--b4ab--3f734fe52f75-osd--block--5631a8c0--2403--5b6d--b4ab--3f734fe52f75'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ftiMBK-3syo-qzxd-buQ2-NTAu-qnjQ-3YjiVV', 'scsi-0QEMU_QEMU_HARDDISK_9b35f7c3-f4ee-4f20-a638-8acbecbf2b97', 'scsi-SQEMU_QEMU_HARDDISK_9b35f7c3-f4ee-4f20-a638-8acbecbf2b97'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355967 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355985 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--32fceb46--e08d--5445--84d6--a85b98e59ab0-osd--block--32fceb46--e08d--5445--84d6--a85b98e59ab0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qdrrtu-Epqe-kEGe-GCqz-8pei-2gK0-ll8Cgo', 'scsi-0QEMU_QEMU_HARDDISK_0ec87ec4-de78-4354-a913-8c3da733e508', 'scsi-SQEMU_QEMU_HARDDISK_0ec87ec4-de78-4354-a913-8c3da733e508'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.355994 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.356006 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f326ea53-fd8a-4d1e-8637-ed74e9f7229b', 'scsi-SQEMU_QEMU_HARDDISK_f326ea53-fd8a-4d1e-8637-ed74e9f7229b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.356015 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.356024 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.356038 | orchestrator | skipping: 
[testbed-node-4] 2025-09-19 07:12:59.356051 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.356059 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.356068 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.356080 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.356089 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.356103 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d', 'scsi-SQEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part1', 'scsi-SQEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part14', 'scsi-SQEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part15', 'scsi-SQEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part16', 'scsi-SQEMU_QEMU_HARDDISK_09d1dc7c-0142-46b7-bfb8-c4846e18939d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-19 07:12:59.356121 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2af2e838--b751--5a2f--ab09--cbc0dc745073-osd--block--2af2e838--b751--5a2f--ab09--cbc0dc745073'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Yyfqwl-HK9C-vUWq-ezQ3-J1x4-v9wL-Z7Zvjt', 'scsi-0QEMU_QEMU_HARDDISK_1f9d1cec-7d6c-4c71-8749-cd7e53c954b2', 'scsi-SQEMU_QEMU_HARDDISK_1f9d1cec-7d6c-4c71-8749-cd7e53c954b2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.356130 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--03228564--3151--5027--920d--737061be0eca-osd--block--03228564--3151--5027--920d--737061be0eca'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4gNePi-p6bZ-PnsU-Kexi-wYB8-ohCZ-z8YGsJ', 'scsi-0QEMU_QEMU_HARDDISK_68d7532d-29ea-4f3d-b7b6-675f70301c39', 'scsi-SQEMU_QEMU_HARDDISK_68d7532d-29ea-4f3d-b7b6-675f70301c39'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.356139 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c8e79e65-71f7-4ae8-8fa4-6c07ef757528', 'scsi-SQEMU_QEMU_HARDDISK_c8e79e65-71f7-4ae8-8fa4-6c07ef757528'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:59.356157 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:12:59.356165 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:59.356174 | orchestrator |
2025-09-19 07:12:59.356182 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-09-19 07:12:59.356191 | orchestrator | Friday 19 September 2025 07:11:06 +0000 (0:00:00.507) 0:00:16.389 ******
2025-09-19 07:12:59.356199 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:59.356207 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:59.356215 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:59.356223 | orchestrator |
2025-09-19 07:12:59.356232 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-09-19 07:12:59.356240 | orchestrator | Friday 19 September 2025 07:11:06 +0000 (0:00:00.620) 0:00:17.010 ******
2025-09-19 07:12:59.356248 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:59.356256 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:59.356264 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:59.356272 | orchestrator |
2025-09-19 07:12:59.356280 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-19 07:12:59.356288 | orchestrator | Friday 19 September 2025 07:11:07 +0000 (0:00:00.365) 0:00:17.375 ******
2025-09-19 07:12:59.356296 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:59.356305 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:59.356313 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:59.356321 | orchestrator |
2025-09-19 07:12:59.356329 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-19 07:12:59.356337 | orchestrator | Friday 19 September 2025 07:11:07 +0000 (0:00:00.625) 0:00:18.000 ******
2025-09-19 07:12:59.356345 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:59.356353 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:59.356361 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:59.356369 | orchestrator |
2025-09-19 07:12:59.356378 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-19 07:12:59.356386 | orchestrator | Friday 19 September 2025 07:11:08 +0000 (0:00:00.232) 0:00:18.233 ******
2025-09-19 07:12:59.356394 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:59.356402 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:59.356410 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:59.356418 | orchestrator |
2025-09-19 07:12:59.356430 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-19 07:12:59.356438 | orchestrator | Friday 19 September 2025 07:11:08 +0000 (0:00:00.343) 0:00:18.577 ******
2025-09-19 07:12:59.356451 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:59.356459 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:59.356467 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:59.356476 | orchestrator |
2025-09-19 07:12:59.356484 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-09-19 07:12:59.356492 | orchestrator | Friday 19 September 2025 07:11:08 +0000 (0:00:00.411) 0:00:18.988 ******
2025-09-19 07:12:59.356500 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-09-19 07:12:59.356509 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-09-19 07:12:59.356517 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-09-19 07:12:59.356525 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-09-19 07:12:59.356534 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-09-19 07:12:59.356542 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-09-19 07:12:59.356549 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-09-19 07:12:59.356558 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-09-19 07:12:59.356566 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-09-19 07:12:59.356574 | orchestrator |
2025-09-19 07:12:59.356582 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-09-19 07:12:59.356590 | orchestrator | Friday 19 September 2025 07:11:09 +0000 (0:00:00.938) 0:00:19.927 ******
2025-09-19 07:12:59.356598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-19 07:12:59.356606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-19 07:12:59.356614 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-19 07:12:59.356622 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:59.356630 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-19 07:12:59.356638 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-19 07:12:59.356646 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-19 07:12:59.356654 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:59.356662 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-19 07:12:59.356670 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-19 07:12:59.356678 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-19 07:12:59.356686 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:59.356694 | orchestrator |
2025-09-19 07:12:59.356703 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-09-19 07:12:59.356711 | orchestrator | Friday 19 September 2025 07:11:10 +0000 (0:00:00.330) 0:00:20.258 ******
2025-09-19 07:12:59.356719 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:12:59.356727 | orchestrator |
2025-09-19 07:12:59.356735 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-19 07:12:59.356744 | orchestrator | Friday 19 September 2025 07:11:10 +0000 (0:00:00.734) 0:00:20.992 ******
2025-09-19 07:12:59.356752 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:59.356760 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:59.356768 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:59.356776 | orchestrator |
2025-09-19 07:12:59.356788 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-19 07:12:59.356797 | orchestrator | Friday 19 September 2025 07:11:11 +0000 (0:00:00.362) 0:00:21.355 ******
2025-09-19 07:12:59.356805 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:59.356813 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:59.356821 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:59.356844 | orchestrator |
2025-09-19 07:12:59.356853 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-19 07:12:59.356861 | orchestrator | Friday 19 September 2025 07:11:11 +0000 (0:00:00.335) 0:00:21.690 ******
2025-09-19 07:12:59.356875 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:59.356883 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:59.356891 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:59.356899 | orchestrator |
2025-09-19 07:12:59.356907 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-19 07:12:59.356915 | orchestrator | Friday 19 September 2025 07:11:11 +0000 (0:00:00.313) 0:00:22.004 ******
2025-09-19 07:12:59.356924 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:59.356932 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:59.356940 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:59.356948 | orchestrator |
2025-09-19 07:12:59.356956 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-19 07:12:59.356964 | orchestrator | Friday 19 September 2025 07:11:12 +0000 (0:00:00.607) 0:00:22.611 ******
2025-09-19 07:12:59.356972 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 07:12:59.356980 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 07:12:59.356988 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 07:12:59.356996 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:59.357004 | orchestrator |
2025-09-19 07:12:59.357013 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-19 07:12:59.357021 | orchestrator | Friday 19 September 2025 07:11:12 +0000 (0:00:00.420) 0:00:23.031 ******
2025-09-19 07:12:59.357029 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 07:12:59.357037 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 07:12:59.357045 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 07:12:59.357053 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:59.357061 | orchestrator |
2025-09-19 07:12:59.357069 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-19 07:12:59.357083 | orchestrator | Friday 19 September 2025 07:11:13 +0000 (0:00:00.383) 0:00:23.415 ******
2025-09-19 07:12:59.357091 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 07:12:59.357100 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 07:12:59.357108 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 07:12:59.357116 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:59.357124 | orchestrator |
2025-09-19 07:12:59.357132 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-19 07:12:59.357140 | orchestrator | Friday 19 September 2025 07:11:13 +0000 (0:00:00.353) 0:00:23.769 ******
2025-09-19 07:12:59.357148 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:59.357156 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:59.357164 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:59.357172 | orchestrator |
2025-09-19 07:12:59.357180 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-19 07:12:59.357189 | orchestrator | Friday 19 September 2025 07:11:13 +0000 (0:00:00.309) 0:00:24.078 ******
2025-09-19 07:12:59.357197 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-19 07:12:59.357205 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-19 07:12:59.357213 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-19 07:12:59.357221 | orchestrator |
2025-09-19 07:12:59.357229 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-09-19 07:12:59.357237 | orchestrator | Friday 19 September 2025 07:11:14 +0000 (0:00:00.549) 0:00:24.627 ******
2025-09-19 07:12:59.357245 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-19 07:12:59.357254 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 07:12:59.357262 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 07:12:59.357270 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 07:12:59.357278 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-19 07:12:59.357292 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-19 07:12:59.357300 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-19 07:12:59.357308 | orchestrator |
2025-09-19 07:12:59.357316 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-09-19 07:12:59.357324 | orchestrator | Friday 19 September 2025 07:11:15 +0000 (0:00:00.987) 0:00:25.615 ******
2025-09-19 07:12:59.357332 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-19 07:12:59.357340 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 07:12:59.357348 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 07:12:59.357356 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 07:12:59.357364 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-19 07:12:59.357372 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-19 07:12:59.357381 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-19 07:12:59.357389 | orchestrator |
2025-09-19 07:12:59.357400 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-09-19 07:12:59.357409 | orchestrator | Friday 19 September 2025 07:11:17 +0000 (0:00:01.819) 0:00:27.434 ******
2025-09-19 07:12:59.357417 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:59.357425 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:59.357433 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-09-19 07:12:59.357441 | orchestrator |
2025-09-19 07:12:59.357449 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-09-19 07:12:59.357457 | orchestrator | Friday 19 September 2025 07:11:17 +0000 (0:00:00.339) 0:00:27.773 ******
2025-09-19 07:12:59.357466 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-19 07:12:59.357475 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-19 07:12:59.357484 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-19 07:12:59.357492 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-19 07:12:59.357504 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-19 07:12:59.357512 | orchestrator |
2025-09-19 07:12:59.357521 | orchestrator | TASK [generate keys] ***********************************************************
2025-09-19 07:12:59.357529 | orchestrator | Friday 19 September 2025 07:12:03 +0000 (0:00:46.018) 0:01:13.792 ******
2025-09-19 07:12:59.357537 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 07:12:59.357551 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 07:12:59.357559 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 07:12:59.357567 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 07:12:59.357575 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 07:12:59.357583 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 07:12:59.357591 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-09-19 07:12:59.357599 | orchestrator |
2025-09-19 07:12:59.357607 | orchestrator | TASK [get keys from monitors] **************************************************
2025-09-19 07:12:59.357615 | orchestrator | Friday 19 September 2025 07:12:27 +0000 (0:00:23.962) 0:01:37.755 ******
2025-09-19 07:12:59.357623 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 07:12:59.357631 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 07:12:59.357639 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 07:12:59.357647 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 07:12:59.357655 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 07:12:59.357663 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 07:12:59.357672 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-19 07:12:59.357679 | orchestrator |
2025-09-19 07:12:59.357687 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-09-19 07:12:59.357695 | orchestrator | Friday 19 September 2025 07:12:39 +0000 (0:00:11.839) 0:01:49.595 ******
2025-09-19 07:12:59.357704 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 07:12:59.357712 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-19 07:12:59.357720 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-19 07:12:59.357728 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 07:12:59.357736 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-19 07:12:59.357744 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-19 07:12:59.357756 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 07:12:59.357764 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-19 07:12:59.357772 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-19 07:12:59.357780 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 07:12:59.357788 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-19 07:12:59.357796 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-19 07:12:59.357804 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 07:12:59.357812 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-19 07:12:59.357820 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-19 07:12:59.357842 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 07:12:59.357850 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-19 07:12:59.357858 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-19 07:12:59.357866 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-09-19 07:12:59.357880 | orchestrator |
2025-09-19 07:12:59.357888 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:12:59.357896 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-09-19 07:12:59.357905 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-19 07:12:59.357913 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-09-19 07:12:59.357921 | orchestrator |
2025-09-19 07:12:59.357930 | orchestrator |
2025-09-19 07:12:59.357938 | orchestrator |
2025-09-19 07:12:59.357946 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:12:59.357959 | orchestrator | Friday 19 September 2025 07:12:56 +0000 (0:00:17.512) 0:02:07.107 ******
2025-09-19 07:12:59.357967 | orchestrator | ===============================================================================
2025-09-19 07:12:59.357975 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.02s
2025-09-19 07:12:59.357983 | orchestrator | generate keys ---------------------------------------------------------- 23.96s
2025-09-19 07:12:59.357991 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.51s
2025-09-19 07:12:59.357999 | orchestrator | get keys from monitors ------------------------------------------------- 11.84s 2025-09-19 07:12:59.358007 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.22s 2025-09-19 07:12:59.358054 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.82s 2025-09-19 07:12:59.358065 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.71s 2025-09-19 07:12:59.358073 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.99s 2025-09-19 07:12:59.358081 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.94s 2025-09-19 07:12:59.358090 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.80s 2025-09-19 07:12:59.358098 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.77s 2025-09-19 07:12:59.358106 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.73s 2025-09-19 07:12:59.358114 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.65s 2025-09-19 07:12:59.358122 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.63s 2025-09-19 07:12:59.358131 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.63s 2025-09-19 07:12:59.358139 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.62s 2025-09-19 07:12:59.358147 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.61s 2025-09-19 07:12:59.358155 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.60s 2025-09-19 07:12:59.358163 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.55s 2025-09-19 
07:12:59.358171 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.51s 2025-09-19 07:12:59.358180 | orchestrator | 2025-09-19 07:12:59 | INFO  | Task 24888692-b0db-444b-91dc-629167508591 is in state STARTED 2025-09-19 07:12:59.358188 | orchestrator | 2025-09-19 07:12:59 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:12:59.358196 | orchestrator | 2025-09-19 07:12:59 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:13:02.401597 | orchestrator | 2025-09-19 07:13:02 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:13:02.401703 | orchestrator | 2025-09-19 07:13:02 | INFO  | Task 24888692-b0db-444b-91dc-629167508591 is in state STARTED 2025-09-19 07:13:02.402247 | orchestrator | 2025-09-19 07:13:02 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:13:02.402304 | orchestrator | 2025-09-19 07:13:02 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:13:05.433769 | orchestrator | 2025-09-19 07:13:05 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:13:05.434602 | orchestrator | 2025-09-19 07:13:05 | INFO  | Task 24888692-b0db-444b-91dc-629167508591 is in state STARTED 2025-09-19 07:13:05.436619 | orchestrator | 2025-09-19 07:13:05 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:13:05.436647 | orchestrator | 2025-09-19 07:13:05 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:13:08.468878 | orchestrator | 2025-09-19 07:13:08 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:13:08.471656 | orchestrator | 2025-09-19 07:13:08 | INFO  | Task 24888692-b0db-444b-91dc-629167508591 is in state STARTED 2025-09-19 07:13:08.474450 | orchestrator | 2025-09-19 07:13:08 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:13:08.474477 | orchestrator | 
2025-09-19 07:13:08 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:13:11.518867 | orchestrator | 2025-09-19 07:13:11 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:13:11.519379 | orchestrator | 2025-09-19 07:13:11 | INFO  | Task 24888692-b0db-444b-91dc-629167508591 is in state STARTED 2025-09-19 07:13:11.521319 | orchestrator | 2025-09-19 07:13:11 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:13:11.521333 | orchestrator | 2025-09-19 07:13:11 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:13:14.573638 | orchestrator | 2025-09-19 07:13:14 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:13:14.575299 | orchestrator | 2025-09-19 07:13:14 | INFO  | Task 24888692-b0db-444b-91dc-629167508591 is in state STARTED 2025-09-19 07:13:14.577982 | orchestrator | 2025-09-19 07:13:14 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:13:14.578112 | orchestrator | 2025-09-19 07:13:14 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:13:17.627791 | orchestrator | 2025-09-19 07:13:17 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:13:17.628993 | orchestrator | 2025-09-19 07:13:17 | INFO  | Task 24888692-b0db-444b-91dc-629167508591 is in state STARTED 2025-09-19 07:13:17.630380 | orchestrator | 2025-09-19 07:13:17 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:13:17.630469 | orchestrator | 2025-09-19 07:13:17 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:13:20.671490 | orchestrator | 2025-09-19 07:13:20 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:13:20.673082 | orchestrator | 2025-09-19 07:13:20 | INFO  | Task 24888692-b0db-444b-91dc-629167508591 is in state STARTED 2025-09-19 07:13:20.674879 | orchestrator | 2025-09-19 07:13:20 | INFO  | Task 
020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:13:20.674910 | orchestrator | 2025-09-19 07:13:20 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:13:23.722776 | orchestrator | 2025-09-19 07:13:23 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:13:23.723762 | orchestrator | 2025-09-19 07:13:23 | INFO  | Task 24888692-b0db-444b-91dc-629167508591 is in state STARTED 2025-09-19 07:13:23.725015 | orchestrator | 2025-09-19 07:13:23 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:13:23.725196 | orchestrator | 2025-09-19 07:13:23 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:13:26.776656 | orchestrator | 2025-09-19 07:13:26 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:13:26.777400 | orchestrator | 2025-09-19 07:13:26 | INFO  | Task 24888692-b0db-444b-91dc-629167508591 is in state SUCCESS 2025-09-19 07:13:26.779464 | orchestrator | 2025-09-19 07:13:26 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:13:26.779512 | orchestrator | 2025-09-19 07:13:26 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:13:29.825412 | orchestrator | 2025-09-19 07:13:29 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:13:29.826457 | orchestrator | 2025-09-19 07:13:29 | INFO  | Task 246e242b-1ed3-48ba-8c96-44a05fb75ecd is in state STARTED 2025-09-19 07:13:29.828729 | orchestrator | 2025-09-19 07:13:29 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:13:29.828774 | orchestrator | 2025-09-19 07:13:29 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:13:32.878288 | orchestrator | 2025-09-19 07:13:32 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:13:32.879894 | orchestrator | 2025-09-19 07:13:32 | INFO  | Task 246e242b-1ed3-48ba-8c96-44a05fb75ecd is in state 
STARTED 2025-09-19 07:13:32.881210 | orchestrator | 2025-09-19 07:13:32 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:13:32.881418 | orchestrator | 2025-09-19 07:13:32 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:13:35.925339 | orchestrator | 2025-09-19 07:13:35 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:13:35.927538 | orchestrator | 2025-09-19 07:13:35 | INFO  | Task 246e242b-1ed3-48ba-8c96-44a05fb75ecd is in state STARTED 2025-09-19 07:13:35.930363 | orchestrator | 2025-09-19 07:13:35 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:13:35.930401 | orchestrator | 2025-09-19 07:13:35 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:13:38.957106 | orchestrator | 2025-09-19 07:13:38 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:13:38.958130 | orchestrator | 2025-09-19 07:13:38 | INFO  | Task 246e242b-1ed3-48ba-8c96-44a05fb75ecd is in state STARTED 2025-09-19 07:13:38.959094 | orchestrator | 2025-09-19 07:13:38 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:13:38.959130 | orchestrator | 2025-09-19 07:13:38 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:13:41.997191 | orchestrator | 2025-09-19 07:13:41 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:13:41.999635 | orchestrator | 2025-09-19 07:13:41 | INFO  | Task 246e242b-1ed3-48ba-8c96-44a05fb75ecd is in state STARTED 2025-09-19 07:13:42.002064 | orchestrator | 2025-09-19 07:13:42 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:13:42.002104 | orchestrator | 2025-09-19 07:13:42 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:13:45.053420 | orchestrator | 2025-09-19 07:13:45 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:13:45.054709 | orchestrator | 
2025-09-19 07:13:45 | INFO  | Task 246e242b-1ed3-48ba-8c96-44a05fb75ecd is in state STARTED 2025-09-19 07:13:45.057652 | orchestrator | 2025-09-19 07:13:45 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:13:45.057704 | orchestrator | 2025-09-19 07:13:45 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:13:48.101052 | orchestrator | 2025-09-19 07:13:48 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:13:48.102263 | orchestrator | 2025-09-19 07:13:48 | INFO  | Task 246e242b-1ed3-48ba-8c96-44a05fb75ecd is in state STARTED 2025-09-19 07:13:48.105221 | orchestrator | 2025-09-19 07:13:48 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:13:48.105294 | orchestrator | 2025-09-19 07:13:48 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:13:51.154641 | orchestrator | 2025-09-19 07:13:51 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:13:51.154737 | orchestrator | 2025-09-19 07:13:51 | INFO  | Task 246e242b-1ed3-48ba-8c96-44a05fb75ecd is in state STARTED 2025-09-19 07:13:51.159218 | orchestrator | 2025-09-19 07:13:51 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:13:51.159256 | orchestrator | 2025-09-19 07:13:51 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:13:54.199631 | orchestrator | 2025-09-19 07:13:54 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:13:54.201060 | orchestrator | 2025-09-19 07:13:54 | INFO  | Task 246e242b-1ed3-48ba-8c96-44a05fb75ecd is in state STARTED 2025-09-19 07:13:54.202328 | orchestrator | 2025-09-19 07:13:54 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:13:54.202368 | orchestrator | 2025-09-19 07:13:54 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:13:57.241105 | orchestrator | 2025-09-19 07:13:57 | INFO  | Task 
a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:13:57.243043 | orchestrator | 2025-09-19 07:13:57 | INFO  | Task 246e242b-1ed3-48ba-8c96-44a05fb75ecd is in state STARTED 2025-09-19 07:13:57.245935 | orchestrator | 2025-09-19 07:13:57 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:13:57.246085 | orchestrator | 2025-09-19 07:13:57 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:14:00.288243 | orchestrator | 2025-09-19 07:14:00 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state STARTED 2025-09-19 07:14:00.289650 | orchestrator | 2025-09-19 07:14:00 | INFO  | Task 246e242b-1ed3-48ba-8c96-44a05fb75ecd is in state STARTED 2025-09-19 07:14:00.292267 | orchestrator | 2025-09-19 07:14:00 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:14:00.292315 | orchestrator | 2025-09-19 07:14:00 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:14:03.329885 | orchestrator | 2025-09-19 07:14:03 | INFO  | Task a6e19c9a-d7a5-4271-a0bb-26ff8169d02d is in state SUCCESS 2025-09-19 07:14:03.331162 | orchestrator | 2025-09-19 07:14:03.331456 | orchestrator | 2025-09-19 07:14:03.331477 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-09-19 07:14:03.331490 | orchestrator | 2025-09-19 07:14:03.331502 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-09-19 07:14:03.331514 | orchestrator | Friday 19 September 2025 07:13:00 +0000 (0:00:00.171) 0:00:00.171 ****** 2025-09-19 07:14:03.331526 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-09-19 07:14:03.331538 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-19 07:14:03.331549 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.cinder.keyring) 2025-09-19 07:14:03.331560 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 07:14:03.331571 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-19 07:14:03.331608 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-09-19 07:14:03.331634 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-09-19 07:14:03.331646 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-09-19 07:14:03.331657 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-09-19 07:14:03.331669 | orchestrator | 2025-09-19 07:14:03.331680 | orchestrator | TASK [Create share directory] ************************************************** 2025-09-19 07:14:03.331691 | orchestrator | Friday 19 September 2025 07:13:04 +0000 (0:00:03.981) 0:00:04.152 ****** 2025-09-19 07:14:03.331703 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-19 07:14:03.331715 | orchestrator | 2025-09-19 07:14:03.331726 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-09-19 07:14:03.331737 | orchestrator | Friday 19 September 2025 07:13:05 +0000 (0:00:01.009) 0:00:05.161 ****** 2025-09-19 07:14:03.331748 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-09-19 07:14:03.331760 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-19 07:14:03.331771 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-19 07:14:03.331803 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 
2025-09-19 07:14:03.331814 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-19 07:14:03.331826 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-09-19 07:14:03.331837 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-09-19 07:14:03.331848 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-09-19 07:14:03.331859 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-09-19 07:14:03.331871 | orchestrator | 2025-09-19 07:14:03.331882 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-09-19 07:14:03.331893 | orchestrator | Friday 19 September 2025 07:13:18 +0000 (0:00:12.834) 0:00:17.996 ****** 2025-09-19 07:14:03.331906 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-09-19 07:14:03.331917 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-19 07:14:03.331928 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-19 07:14:03.331939 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 07:14:03.331950 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-19 07:14:03.331962 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-09-19 07:14:03.331973 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-09-19 07:14:03.331984 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-09-19 07:14:03.331995 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-09-19 07:14:03.332006 | orchestrator | 2025-09-19 07:14:03.332017 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-19 07:14:03.332029 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:14:03.332042 | orchestrator | 2025-09-19 07:14:03.332053 | orchestrator | 2025-09-19 07:14:03.332067 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:14:03.332080 | orchestrator | Friday 19 September 2025 07:13:25 +0000 (0:00:06.785) 0:00:24.782 ****** 2025-09-19 07:14:03.332093 | orchestrator | =============================================================================== 2025-09-19 07:14:03.332106 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.83s 2025-09-19 07:14:03.332128 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.79s 2025-09-19 07:14:03.332140 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 3.98s 2025-09-19 07:14:03.332151 | orchestrator | Create share directory -------------------------------------------------- 1.01s 2025-09-19 07:14:03.332162 | orchestrator | 2025-09-19 07:14:03.332173 | orchestrator | 2025-09-19 07:14:03.332185 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:14:03.332197 | orchestrator | 2025-09-19 07:14:03.332318 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 07:14:03.332336 | orchestrator | Friday 19 September 2025 07:12:14 +0000 (0:00:00.273) 0:00:00.273 ****** 2025-09-19 07:14:03.332348 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:14:03.332360 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:14:03.332371 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:14:03.332382 | orchestrator | 2025-09-19 07:14:03.332394 | orchestrator | TASK [Group hosts based on enabled services] 
*********************************** 2025-09-19 07:14:03.332405 | orchestrator | Friday 19 September 2025 07:12:14 +0000 (0:00:00.301) 0:00:00.575 ****** 2025-09-19 07:14:03.332416 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-09-19 07:14:03.332428 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-09-19 07:14:03.332439 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-09-19 07:14:03.332450 | orchestrator | 2025-09-19 07:14:03.332461 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-09-19 07:14:03.332473 | orchestrator | 2025-09-19 07:14:03.332484 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-19 07:14:03.332495 | orchestrator | Friday 19 September 2025 07:12:15 +0000 (0:00:00.415) 0:00:00.990 ****** 2025-09-19 07:14:03.332506 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:14:03.332517 | orchestrator | 2025-09-19 07:14:03.332536 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-09-19 07:14:03.332548 | orchestrator | Friday 19 September 2025 07:12:15 +0000 (0:00:00.518) 0:00:01.509 ****** 2025-09-19 07:14:03.332567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 07:14:03.332613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 07:14:03.332629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 07:14:03.332649 | 
orchestrator | 2025-09-19 07:14:03.332661 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-09-19 07:14:03.332672 | orchestrator | Friday 19 September 2025 07:12:16 +0000 (0:00:01.199) 0:00:02.708 ****** 2025-09-19 07:14:03.332684 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:14:03.332695 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:14:03.332706 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:14:03.332717 | orchestrator | 2025-09-19 07:14:03.332729 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-19 07:14:03.332740 | orchestrator | Friday 19 September 2025 07:12:17 +0000 (0:00:00.431) 0:00:03.140 ****** 2025-09-19 07:14:03.332752 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-19 07:14:03.332770 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-19 07:14:03.332821 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-09-19 07:14:03.332833 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-09-19 07:14:03.332844 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-09-19 07:14:03.332855 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-09-19 07:14:03.332867 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-09-19 07:14:03.332878 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-09-19 07:14:03.332889 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-19 07:14:03.332900 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-19 07:14:03.332912 | orchestrator | skipping: 
[testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-09-19 07:14:03.332923 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-09-19 07:14:03.332939 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-09-19 07:14:03.332953 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-09-19 07:14:03.332966 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-09-19 07:14:03.332980 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-09-19 07:14:03.332992 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-19 07:14:03.333006 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-19 07:14:03.333018 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-09-19 07:14:03.333031 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-09-19 07:14:03.333044 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-09-19 07:14:03.333057 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-09-19 07:14:03.333071 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-09-19 07:14:03.333084 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-09-19 07:14:03.333104 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-09-19 07:14:03.333119 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 
'enabled': 'yes'}) 2025-09-19 07:14:03.333133 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-09-19 07:14:03.333146 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-09-19 07:14:03.333159 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-09-19 07:14:03.333173 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-09-19 07:14:03.333185 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-09-19 07:14:03.333198 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-09-19 07:14:03.333211 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-09-19 07:14:03.333224 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-09-19 07:14:03.333238 | orchestrator | 2025-09-19 07:14:03.333251 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-19 07:14:03.333264 | orchestrator | Friday 19 September 2025 07:12:18 +0000 (0:00:00.736) 0:00:03.876 ****** 2025-09-19 07:14:03.333278 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:14:03.333290 | orchestrator | ok: 
[testbed-node-1] 2025-09-19 07:14:03.333301 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:14:03.333313 | orchestrator | 2025-09-19 07:14:03.333324 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-19 07:14:03.333335 | orchestrator | Friday 19 September 2025 07:12:18 +0000 (0:00:00.297) 0:00:04.174 ****** 2025-09-19 07:14:03.333347 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:03.333359 | orchestrator | 2025-09-19 07:14:03.333376 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-19 07:14:03.333388 | orchestrator | Friday 19 September 2025 07:12:18 +0000 (0:00:00.118) 0:00:04.293 ****** 2025-09-19 07:14:03.333399 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:03.333411 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:14:03.333422 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:14:03.333433 | orchestrator | 2025-09-19 07:14:03.333445 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-19 07:14:03.333456 | orchestrator | Friday 19 September 2025 07:12:18 +0000 (0:00:00.450) 0:00:04.743 ****** 2025-09-19 07:14:03.333467 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:14:03.333479 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:14:03.333490 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:14:03.333501 | orchestrator | 2025-09-19 07:14:03.333512 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-19 07:14:03.333524 | orchestrator | Friday 19 September 2025 07:12:19 +0000 (0:00:00.316) 0:00:05.059 ****** 2025-09-19 07:14:03.333535 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:03.333546 | orchestrator | 2025-09-19 07:14:03.333557 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-19 07:14:03.333575 | orchestrator | Friday 19 
September 2025 07:12:19 +0000 (0:00:00.140) 0:00:05.200 ****** 2025-09-19 07:14:03.333586 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:03.333598 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:14:03.333609 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:14:03.333620 | orchestrator | 2025-09-19 07:14:03.333636 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-19 07:14:03.333647 | orchestrator | Friday 19 September 2025 07:12:19 +0000 (0:00:00.272) 0:00:05.473 ****** 2025-09-19 07:14:03.333659 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:14:03.333670 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:14:03.333681 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:14:03.333692 | orchestrator | 2025-09-19 07:14:03.333704 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-19 07:14:03.333715 | orchestrator | Friday 19 September 2025 07:12:19 +0000 (0:00:00.317) 0:00:05.791 ****** 2025-09-19 07:14:03.333726 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:03.333738 | orchestrator | 2025-09-19 07:14:03.333749 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-19 07:14:03.333761 | orchestrator | Friday 19 September 2025 07:12:20 +0000 (0:00:00.364) 0:00:06.156 ****** 2025-09-19 07:14:03.333772 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:03.333837 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:14:03.333848 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:14:03.333860 | orchestrator | 2025-09-19 07:14:03.333871 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-19 07:14:03.333883 | orchestrator | Friday 19 September 2025 07:12:20 +0000 (0:00:00.324) 0:00:06.480 ****** 2025-09-19 07:14:03.333894 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:14:03.333905 | 
orchestrator | ok: [testbed-node-1] 2025-09-19 07:14:03.333917 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:14:03.333928 | orchestrator | 2025-09-19 07:14:03.333940 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-19 07:14:03.333951 | orchestrator | Friday 19 September 2025 07:12:20 +0000 (0:00:00.318) 0:00:06.799 ****** 2025-09-19 07:14:03.333962 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:03.333973 | orchestrator | 2025-09-19 07:14:03.333985 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-19 07:14:03.333996 | orchestrator | Friday 19 September 2025 07:12:21 +0000 (0:00:00.143) 0:00:06.943 ****** 2025-09-19 07:14:03.334007 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:03.334062 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:14:03.334077 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:14:03.334088 | orchestrator | 2025-09-19 07:14:03.334099 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-19 07:14:03.334110 | orchestrator | Friday 19 September 2025 07:12:21 +0000 (0:00:00.322) 0:00:07.265 ****** 2025-09-19 07:14:03.334121 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:14:03.334133 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:14:03.334144 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:14:03.334154 | orchestrator | 2025-09-19 07:14:03.334164 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-19 07:14:03.334174 | orchestrator | Friday 19 September 2025 07:12:21 +0000 (0:00:00.530) 0:00:07.796 ****** 2025-09-19 07:14:03.334184 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:03.334194 | orchestrator | 2025-09-19 07:14:03.334204 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-19 07:14:03.334214 | 
orchestrator | Friday 19 September 2025 07:12:22 +0000 (0:00:00.138) 0:00:07.935 ****** 2025-09-19 07:14:03.334224 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:03.334234 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:14:03.334244 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:14:03.334254 | orchestrator | 2025-09-19 07:14:03.334264 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-19 07:14:03.334274 | orchestrator | Friday 19 September 2025 07:12:22 +0000 (0:00:00.361) 0:00:08.297 ****** 2025-09-19 07:14:03.334291 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:14:03.334301 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:14:03.334311 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:14:03.334321 | orchestrator | 2025-09-19 07:14:03.334331 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-19 07:14:03.334341 | orchestrator | Friday 19 September 2025 07:12:22 +0000 (0:00:00.300) 0:00:08.597 ****** 2025-09-19 07:14:03.334351 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:03.334361 | orchestrator | 2025-09-19 07:14:03.334371 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-19 07:14:03.334380 | orchestrator | Friday 19 September 2025 07:12:22 +0000 (0:00:00.128) 0:00:08.725 ****** 2025-09-19 07:14:03.334480 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:03.334491 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:14:03.334501 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:14:03.334511 | orchestrator | 2025-09-19 07:14:03.334521 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-19 07:14:03.334531 | orchestrator | Friday 19 September 2025 07:12:23 +0000 (0:00:00.445) 0:00:09.170 ****** 2025-09-19 07:14:03.334541 | orchestrator | ok: [testbed-node-0] 2025-09-19 
07:14:03.334559 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:14:03.334569 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:14:03.334579 | orchestrator | 2025-09-19 07:14:03.334590 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-19 07:14:03.334600 | orchestrator | Friday 19 September 2025 07:12:23 +0000 (0:00:00.325) 0:00:09.496 ****** 2025-09-19 07:14:03.334610 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:03.334620 | orchestrator | 2025-09-19 07:14:03.334693 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-19 07:14:03.334710 | orchestrator | Friday 19 September 2025 07:12:23 +0000 (0:00:00.128) 0:00:09.624 ****** 2025-09-19 07:14:03.334720 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:03.334729 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:14:03.334739 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:14:03.334749 | orchestrator | 2025-09-19 07:14:03.334759 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-19 07:14:03.334769 | orchestrator | Friday 19 September 2025 07:12:24 +0000 (0:00:00.312) 0:00:09.937 ****** 2025-09-19 07:14:03.334809 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:14:03.334827 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:14:03.334844 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:14:03.334860 | orchestrator | 2025-09-19 07:14:03.334877 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-19 07:14:03.334894 | orchestrator | Friday 19 September 2025 07:12:24 +0000 (0:00:00.318) 0:00:10.255 ****** 2025-09-19 07:14:03.334918 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:03.334933 | orchestrator | 2025-09-19 07:14:03.334944 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-19 
07:14:03.334954 | orchestrator | Friday 19 September 2025 07:12:24 +0000 (0:00:00.111) 0:00:10.367 ****** 2025-09-19 07:14:03.334964 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:03.334974 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:14:03.334984 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:14:03.334994 | orchestrator | 2025-09-19 07:14:03.335003 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-19 07:14:03.335013 | orchestrator | Friday 19 September 2025 07:12:25 +0000 (0:00:00.509) 0:00:10.877 ****** 2025-09-19 07:14:03.335023 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:14:03.335033 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:14:03.335043 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:14:03.335053 | orchestrator | 2025-09-19 07:14:03.335063 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-19 07:14:03.335073 | orchestrator | Friday 19 September 2025 07:12:25 +0000 (0:00:00.309) 0:00:11.187 ****** 2025-09-19 07:14:03.335091 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:03.335101 | orchestrator | 2025-09-19 07:14:03.335111 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-19 07:14:03.335121 | orchestrator | Friday 19 September 2025 07:12:25 +0000 (0:00:00.129) 0:00:11.316 ****** 2025-09-19 07:14:03.335131 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:03.335141 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:14:03.335151 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:14:03.335161 | orchestrator | 2025-09-19 07:14:03.335171 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-19 07:14:03.335181 | orchestrator | Friday 19 September 2025 07:12:25 +0000 (0:00:00.321) 0:00:11.638 ****** 2025-09-19 07:14:03.335191 | orchestrator | ok: 
[testbed-node-0] 2025-09-19 07:14:03.335201 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:14:03.335211 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:14:03.335221 | orchestrator | 2025-09-19 07:14:03.335230 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-19 07:14:03.335240 | orchestrator | Friday 19 September 2025 07:12:26 +0000 (0:00:00.501) 0:00:12.140 ****** 2025-09-19 07:14:03.335250 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:03.335260 | orchestrator | 2025-09-19 07:14:03.335270 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-19 07:14:03.335280 | orchestrator | Friday 19 September 2025 07:12:26 +0000 (0:00:00.121) 0:00:12.262 ****** 2025-09-19 07:14:03.335290 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:03.335300 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:14:03.335310 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:14:03.335320 | orchestrator | 2025-09-19 07:14:03.335330 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-09-19 07:14:03.335342 | orchestrator | Friday 19 September 2025 07:12:26 +0000 (0:00:00.301) 0:00:12.563 ****** 2025-09-19 07:14:03.335353 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:14:03.335365 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:14:03.335376 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:14:03.335387 | orchestrator | 2025-09-19 07:14:03.335399 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-09-19 07:14:03.335410 | orchestrator | Friday 19 September 2025 07:12:28 +0000 (0:00:01.654) 0:00:14.218 ****** 2025-09-19 07:14:03.335421 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-19 07:14:03.335433 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-19 07:14:03.335444 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-19 07:14:03.335456 | orchestrator | 2025-09-19 07:14:03.335467 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-09-19 07:14:03.335479 | orchestrator | Friday 19 September 2025 07:12:30 +0000 (0:00:01.916) 0:00:16.134 ****** 2025-09-19 07:14:03.335490 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-19 07:14:03.335502 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-19 07:14:03.335513 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-19 07:14:03.335525 | orchestrator | 2025-09-19 07:14:03.335536 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-09-19 07:14:03.335554 | orchestrator | Friday 19 September 2025 07:12:32 +0000 (0:00:02.361) 0:00:18.496 ****** 2025-09-19 07:14:03.335566 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-19 07:14:03.335577 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-19 07:14:03.335588 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-19 07:14:03.335611 | orchestrator | 2025-09-19 07:14:03.335622 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-09-19 07:14:03.335633 | orchestrator | Friday 19 September 2025 07:12:34 +0000 (0:00:01.539) 0:00:20.035 ****** 2025-09-19 07:14:03.335645 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:03.335656 | orchestrator 
| skipping: [testbed-node-1] 2025-09-19 07:14:03.335667 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:14:03.335679 | orchestrator | 2025-09-19 07:14:03.335690 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-09-19 07:14:03.335701 | orchestrator | Friday 19 September 2025 07:12:34 +0000 (0:00:00.298) 0:00:20.333 ****** 2025-09-19 07:14:03.335711 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:03.335721 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:14:03.335731 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:14:03.335741 | orchestrator | 2025-09-19 07:14:03.335756 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-19 07:14:03.335766 | orchestrator | Friday 19 September 2025 07:12:34 +0000 (0:00:00.282) 0:00:20.615 ****** 2025-09-19 07:14:03.335790 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:14:03.335801 | orchestrator | 2025-09-19 07:14:03.335811 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-09-19 07:14:03.335821 | orchestrator | Friday 19 September 2025 07:12:35 +0000 (0:00:00.804) 0:00:21.420 ****** 2025-09-19 07:14:03.335833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 07:14:03.335860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 07:14:03.335878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 07:14:03.335889 | 
orchestrator | 2025-09-19 07:14:03.335905 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-09-19 07:14:03.335915 | orchestrator | Friday 19 September 2025 07:12:37 +0000 (0:00:01.486) 0:00:22.907 ****** 2025-09-19 07:14:03.335939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 07:14:03.335951 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:03.335968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 07:14:03.335989 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:14:03.336007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 07:14:03.336019 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:14:03.336029 | orchestrator | 2025-09-19 07:14:03.336039 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-09-19 07:14:03.336049 | orchestrator | Friday 19 September 2025 07:12:37 +0000 (0:00:00.659) 0:00:23.566 ****** 2025-09-19 07:14:03.336066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 
'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 07:14:03.336083 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:03.336099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 07:14:03.336110 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:14:03.336127 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 07:14:03.336160 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:14:03.336170 | orchestrator | 2025-09-19 07:14:03.336180 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-09-19 07:14:03.336190 | orchestrator | Friday 19 September 2025 07:12:38 +0000 (0:00:01.284) 0:00:24.851 ****** 2025-09-19 07:14:03.336200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 07:14:03.336229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 07:14:03.336242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': 
{'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 07:14:03.336259 | orchestrator | 2025-09-19 07:14:03.336269 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-19 07:14:03.336279 | orchestrator | Friday 19 September 2025 07:12:40 +0000 (0:00:01.642) 0:00:26.494 ****** 2025-09-19 07:14:03.336288 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:03.336298 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:14:03.336308 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:14:03.336318 | orchestrator | 2025-09-19 07:14:03.336328 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-19 07:14:03.336338 | orchestrator | Friday 19 September 2025 07:12:40 +0000 (0:00:00.313) 0:00:26.807 ****** 2025-09-19 07:14:03.336353 | 
orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:14:03.336363 | orchestrator | 2025-09-19 07:14:03.336373 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-09-19 07:14:03.336383 | orchestrator | Friday 19 September 2025 07:12:41 +0000 (0:00:00.855) 0:00:27.662 ****** 2025-09-19 07:14:03.336393 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:14:03.336403 | orchestrator | 2025-09-19 07:14:03.336413 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-09-19 07:14:03.336422 | orchestrator | Friday 19 September 2025 07:12:44 +0000 (0:00:02.221) 0:00:29.883 ****** 2025-09-19 07:14:03.336432 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:14:03.336442 | orchestrator | 2025-09-19 07:14:03.336452 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-09-19 07:14:03.336462 | orchestrator | Friday 19 September 2025 07:12:46 +0000 (0:00:02.360) 0:00:32.244 ****** 2025-09-19 07:14:03.336472 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:14:03.336482 | orchestrator | 2025-09-19 07:14:03.336492 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-19 07:14:03.336502 | orchestrator | Friday 19 September 2025 07:13:01 +0000 (0:00:14.975) 0:00:47.220 ****** 2025-09-19 07:14:03.336512 | orchestrator | 2025-09-19 07:14:03.336521 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-19 07:14:03.336531 | orchestrator | Friday 19 September 2025 07:13:01 +0000 (0:00:00.070) 0:00:47.291 ****** 2025-09-19 07:14:03.336541 | orchestrator | 2025-09-19 07:14:03.336559 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-19 07:14:03.336569 | orchestrator | Friday 19 September 2025 
07:13:01 +0000 (0:00:00.067) 0:00:47.358 ****** 2025-09-19 07:14:03.336579 | orchestrator | 2025-09-19 07:14:03.336589 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-09-19 07:14:03.336599 | orchestrator | Friday 19 September 2025 07:13:01 +0000 (0:00:00.065) 0:00:47.423 ****** 2025-09-19 07:14:03.336609 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:14:03.336619 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:14:03.336629 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:14:03.336638 | orchestrator | 2025-09-19 07:14:03.336648 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:14:03.336658 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-09-19 07:14:03.336669 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-19 07:14:03.336679 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-19 07:14:03.336694 | orchestrator | 2025-09-19 07:14:03.336704 | orchestrator | 2025-09-19 07:14:03.336715 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:14:03.336724 | orchestrator | Friday 19 September 2025 07:14:00 +0000 (0:00:58.570) 0:01:45.994 ****** 2025-09-19 07:14:03.336734 | orchestrator | =============================================================================== 2025-09-19 07:14:03.336744 | orchestrator | horizon : Restart horizon container ------------------------------------ 58.57s 2025-09-19 07:14:03.336754 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.98s 2025-09-19 07:14:03.336764 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.36s 2025-09-19 07:14:03.336774 | orchestrator | horizon : 
Creating Horizon database user and setting permissions -------- 2.36s 2025-09-19 07:14:03.336831 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.22s 2025-09-19 07:14:03.336841 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.92s 2025-09-19 07:14:03.336851 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.65s 2025-09-19 07:14:03.336861 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.64s 2025-09-19 07:14:03.336871 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.54s 2025-09-19 07:14:03.336881 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.49s 2025-09-19 07:14:03.336891 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.28s 2025-09-19 07:14:03.336901 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.20s 2025-09-19 07:14:03.336910 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.86s 2025-09-19 07:14:03.336921 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.80s 2025-09-19 07:14:03.336930 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.74s 2025-09-19 07:14:03.336940 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.66s 2025-09-19 07:14:03.336950 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s 2025-09-19 07:14:03.336960 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s 2025-09-19 07:14:03.336970 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.51s 2025-09-19 07:14:03.336980 | orchestrator | horizon : Update 
policy file name --------------------------------------- 0.50s 2025-09-19 07:14:03.336990 | orchestrator | 2025-09-19 07:14:03 | INFO  | Task 246e242b-1ed3-48ba-8c96-44a05fb75ecd is in state STARTED 2025-09-19 07:14:03.337000 | orchestrator | 2025-09-19 07:14:03 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:14:03.337016 | orchestrator | 2025-09-19 07:14:03 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:14:06.379817 | orchestrator | 2025-09-19 07:14:06 | INFO  | Task 246e242b-1ed3-48ba-8c96-44a05fb75ecd is in state STARTED 2025-09-19 07:14:06.380966 | orchestrator | 2025-09-19 07:14:06 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:14:06.380989 | orchestrator | 2025-09-19 07:14:06 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:14:09.420244 | orchestrator | 2025-09-19 07:14:09 | INFO  | Task 246e242b-1ed3-48ba-8c96-44a05fb75ecd is in state STARTED 2025-09-19 07:14:09.420460 | orchestrator | 2025-09-19 07:14:09 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:14:09.420483 | orchestrator | 2025-09-19 07:14:09 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:14:12.470718 | orchestrator | 2025-09-19 07:14:12 | INFO  | Task 246e242b-1ed3-48ba-8c96-44a05fb75ecd is in state STARTED 2025-09-19 07:14:12.472365 | orchestrator | 2025-09-19 07:14:12 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:14:12.472430 | orchestrator | 2025-09-19 07:14:12 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:14:15.515249 | orchestrator | 2025-09-19 07:14:15 | INFO  | Task 246e242b-1ed3-48ba-8c96-44a05fb75ecd is in state STARTED 2025-09-19 07:14:15.518394 | orchestrator | 2025-09-19 07:14:15 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:14:15.518439 | orchestrator | 2025-09-19 07:14:15 | INFO  | Wait 1 second(s) until the next check 2025-09-19 
07:14:18.559383 | orchestrator | 2025-09-19 07:14:18 | INFO  | Task 246e242b-1ed3-48ba-8c96-44a05fb75ecd is in state STARTED 2025-09-19 07:14:18.560843 | orchestrator | 2025-09-19 07:14:18 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:14:18.560886 | orchestrator | 2025-09-19 07:14:18 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:14:21.606714 | orchestrator | 2025-09-19 07:14:21 | INFO  | Task 246e242b-1ed3-48ba-8c96-44a05fb75ecd is in state STARTED 2025-09-19 07:14:21.609613 | orchestrator | 2025-09-19 07:14:21 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:14:21.609653 | orchestrator | 2025-09-19 07:14:21 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:14:24.644621 | orchestrator | 2025-09-19 07:14:24 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:14:24.644726 | orchestrator | 2025-09-19 07:14:24 | INFO  | Task e2e6e675-8987-4a80-b485-c7d6c008ad81 is in state STARTED 2025-09-19 07:14:24.647092 | orchestrator | 2025-09-19 07:14:24 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:14:24.650454 | orchestrator | 2025-09-19 07:14:24 | INFO  | Task 246e242b-1ed3-48ba-8c96-44a05fb75ecd is in state SUCCESS 2025-09-19 07:14:24.652086 | orchestrator | 2025-09-19 07:14:24 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:14:24.652453 | orchestrator | 2025-09-19 07:14:24 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:14:27.684870 | orchestrator | 2025-09-19 07:14:27 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:14:27.686346 | orchestrator | 2025-09-19 07:14:27 | INFO  | Task e2e6e675-8987-4a80-b485-c7d6c008ad81 is in state STARTED 2025-09-19 07:14:27.687207 | orchestrator | 2025-09-19 07:14:27 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:14:27.688917 | orchestrator 
| 2025-09-19 07:14:27 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:14:27.688972 | orchestrator | 2025-09-19 07:14:27 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:14:30.729118 | orchestrator | 2025-09-19 07:14:30 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:14:30.730344 | orchestrator | 2025-09-19 07:14:30 | INFO  | Task e2e6e675-8987-4a80-b485-c7d6c008ad81 is in state SUCCESS 2025-09-19 07:14:30.732806 | orchestrator | 2025-09-19 07:14:30 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:14:30.733939 | orchestrator | 2025-09-19 07:14:30 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:14:30.737109 | orchestrator | 2025-09-19 07:14:30 | INFO  | Task 2d9d0847-c2c7-415c-a14d-46707cb4ea47 is in state STARTED 2025-09-19 07:14:30.738006 | orchestrator | 2025-09-19 07:14:30 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:14:30.738113 | orchestrator | 2025-09-19 07:14:30 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:14:33.775513 | orchestrator | 2025-09-19 07:14:33 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:14:33.777060 | orchestrator | 2025-09-19 07:14:33 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:14:33.779068 | orchestrator | 2025-09-19 07:14:33 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:14:33.780467 | orchestrator | 2025-09-19 07:14:33 | INFO  | Task 2d9d0847-c2c7-415c-a14d-46707cb4ea47 is in state STARTED 2025-09-19 07:14:33.781947 | orchestrator | 2025-09-19 07:14:33 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:14:33.781972 | orchestrator | 2025-09-19 07:14:33 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:14:36.811051 | orchestrator | 2025-09-19 07:14:36 | INFO  | 
Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:14:36.811160 | orchestrator | 2025-09-19 07:14:36 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:14:36.812667 | orchestrator | 2025-09-19 07:14:36 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:14:36.812689 | orchestrator | 2025-09-19 07:14:36 | INFO  | Task 2d9d0847-c2c7-415c-a14d-46707cb4ea47 is in state STARTED 2025-09-19 07:14:36.813420 | orchestrator | 2025-09-19 07:14:36 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:14:36.813442 | orchestrator | 2025-09-19 07:14:36 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:14:39.849831 | orchestrator | 2025-09-19 07:14:39 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:14:39.850652 | orchestrator | 2025-09-19 07:14:39 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:14:39.852361 | orchestrator | 2025-09-19 07:14:39 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:14:39.853419 | orchestrator | 2025-09-19 07:14:39 | INFO  | Task 2d9d0847-c2c7-415c-a14d-46707cb4ea47 is in state STARTED 2025-09-19 07:14:39.854566 | orchestrator | 2025-09-19 07:14:39 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:14:39.854592 | orchestrator | 2025-09-19 07:14:39 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:14:42.961216 | orchestrator | 2025-09-19 07:14:42 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:14:42.962313 | orchestrator | 2025-09-19 07:14:42 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:14:42.964856 | orchestrator | 2025-09-19 07:14:42 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:14:42.966783 | orchestrator | 2025-09-19 07:14:42 | INFO  | Task 
2d9d0847-c2c7-415c-a14d-46707cb4ea47 is in state STARTED 2025-09-19 07:14:42.968052 | orchestrator | 2025-09-19 07:14:42 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:14:42.968248 | orchestrator | 2025-09-19 07:14:42 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:14:46.010478 | orchestrator | 2025-09-19 07:14:46 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:14:46.013810 | orchestrator | 2025-09-19 07:14:46 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:14:46.015704 | orchestrator | 2025-09-19 07:14:46 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:14:46.018244 | orchestrator | 2025-09-19 07:14:46 | INFO  | Task 2d9d0847-c2c7-415c-a14d-46707cb4ea47 is in state STARTED 2025-09-19 07:14:46.019524 | orchestrator | 2025-09-19 07:14:46 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:14:46.019552 | orchestrator | 2025-09-19 07:14:46 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:14:49.052420 | orchestrator | 2025-09-19 07:14:49 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:14:49.054008 | orchestrator | 2025-09-19 07:14:49 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:14:49.055964 | orchestrator | 2025-09-19 07:14:49 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:14:49.057557 | orchestrator | 2025-09-19 07:14:49 | INFO  | Task 2d9d0847-c2c7-415c-a14d-46707cb4ea47 is in state STARTED 2025-09-19 07:14:49.061335 | orchestrator | 2025-09-19 07:14:49 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:14:49.061375 | orchestrator | 2025-09-19 07:14:49 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:14:52.100269 | orchestrator | 2025-09-19 07:14:52 | INFO  | Task 
fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:14:52.100686 | orchestrator | 2025-09-19 07:14:52 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:14:52.102391 | orchestrator | 2025-09-19 07:14:52 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:14:52.102437 | orchestrator | 2025-09-19 07:14:52 | INFO  | Task 2d9d0847-c2c7-415c-a14d-46707cb4ea47 is in state STARTED 2025-09-19 07:14:52.102915 | orchestrator | 2025-09-19 07:14:52 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state STARTED 2025-09-19 07:14:52.103708 | orchestrator | 2025-09-19 07:14:52 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:14:55.138908 | orchestrator | 2025-09-19 07:14:55 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:14:55.139283 | orchestrator | 2025-09-19 07:14:55 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:14:55.140160 | orchestrator | 2025-09-19 07:14:55 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:14:55.141114 | orchestrator | 2025-09-19 07:14:55 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:14:55.141563 | orchestrator | 2025-09-19 07:14:55 | INFO  | Task 2d9d0847-c2c7-415c-a14d-46707cb4ea47 is in state STARTED 2025-09-19 07:14:55.143150 | orchestrator | 2025-09-19 07:14:55 | INFO  | Task 020288ee-69ab-4e28-9c0f-ef982cff53ea is in state SUCCESS 2025-09-19 07:14:55.144916 | orchestrator | 2025-09-19 07:14:55.144966 | orchestrator | 2025-09-19 07:14:55.144980 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-09-19 07:14:55.144993 | orchestrator | 2025-09-19 07:14:55.145004 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-09-19 07:14:55.145016 | orchestrator | Friday 19 September 2025 07:13:30 +0000 
(0:00:00.232) 0:00:00.233 ****** 2025-09-19 07:14:55.145028 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-09-19 07:14:55.145041 | orchestrator | 2025-09-19 07:14:55.145052 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-09-19 07:14:55.145064 | orchestrator | Friday 19 September 2025 07:13:30 +0000 (0:00:00.254) 0:00:00.487 ****** 2025-09-19 07:14:55.145075 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-09-19 07:14:55.145532 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-09-19 07:14:55.145551 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-09-19 07:14:55.145563 | orchestrator | 2025-09-19 07:14:55.145600 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-09-19 07:14:55.145612 | orchestrator | Friday 19 September 2025 07:13:31 +0000 (0:00:01.281) 0:00:01.768 ****** 2025-09-19 07:14:55.145624 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-09-19 07:14:55.145635 | orchestrator | 2025-09-19 07:14:55.145647 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-09-19 07:14:55.145658 | orchestrator | Friday 19 September 2025 07:13:32 +0000 (0:00:01.164) 0:00:02.933 ****** 2025-09-19 07:14:55.145669 | orchestrator | changed: [testbed-manager] 2025-09-19 07:14:55.145681 | orchestrator | 2025-09-19 07:14:55.145693 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-09-19 07:14:55.145704 | orchestrator | Friday 19 September 2025 07:13:33 +0000 (0:00:01.003) 0:00:03.936 ****** 2025-09-19 07:14:55.145715 | orchestrator | changed: [testbed-manager] 2025-09-19 07:14:55.145726 | orchestrator 
| 2025-09-19 07:14:55.145737 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-09-19 07:14:55.145775 | orchestrator | Friday 19 September 2025 07:13:34 +0000 (0:00:00.832) 0:00:04.769 ****** 2025-09-19 07:14:55.145786 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-09-19 07:14:55.145798 | orchestrator | ok: [testbed-manager] 2025-09-19 07:14:55.145810 | orchestrator | 2025-09-19 07:14:55.145821 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-09-19 07:14:55.145833 | orchestrator | Friday 19 September 2025 07:14:13 +0000 (0:00:39.037) 0:00:43.806 ****** 2025-09-19 07:14:55.145844 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-09-19 07:14:55.145856 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-09-19 07:14:55.145868 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-09-19 07:14:55.145879 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-09-19 07:14:55.145891 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-09-19 07:14:55.145902 | orchestrator | 2025-09-19 07:14:55.145914 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-09-19 07:14:55.145925 | orchestrator | Friday 19 September 2025 07:14:17 +0000 (0:00:03.835) 0:00:47.642 ****** 2025-09-19 07:14:55.145936 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-09-19 07:14:55.145961 | orchestrator | 2025-09-19 07:14:55.145985 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-09-19 07:14:55.145996 | orchestrator | Friday 19 September 2025 07:14:17 +0000 (0:00:00.405) 0:00:48.047 ****** 2025-09-19 07:14:55.146008 | orchestrator | skipping: [testbed-manager] 2025-09-19 07:14:55.146063 | orchestrator | 2025-09-19 07:14:55.146077 | orchestrator | TASK 
[osism.services.cephclient : Include rook task] *************************** 2025-09-19 07:14:55.146089 | orchestrator | Friday 19 September 2025 07:14:18 +0000 (0:00:00.128) 0:00:48.176 ****** 2025-09-19 07:14:55.146100 | orchestrator | skipping: [testbed-manager] 2025-09-19 07:14:55.146111 | orchestrator | 2025-09-19 07:14:55.146123 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-09-19 07:14:55.146134 | orchestrator | Friday 19 September 2025 07:14:18 +0000 (0:00:00.266) 0:00:48.442 ****** 2025-09-19 07:14:55.146146 | orchestrator | changed: [testbed-manager] 2025-09-19 07:14:55.146160 | orchestrator | 2025-09-19 07:14:55.146173 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-09-19 07:14:55.146185 | orchestrator | Friday 19 September 2025 07:14:20 +0000 (0:00:02.560) 0:00:51.003 ****** 2025-09-19 07:14:55.146198 | orchestrator | changed: [testbed-manager] 2025-09-19 07:14:55.146211 | orchestrator | 2025-09-19 07:14:55.146224 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-09-19 07:14:55.146236 | orchestrator | Friday 19 September 2025 07:14:21 +0000 (0:00:00.623) 0:00:51.626 ****** 2025-09-19 07:14:55.146249 | orchestrator | changed: [testbed-manager] 2025-09-19 07:14:55.146261 | orchestrator | 2025-09-19 07:14:55.146274 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-09-19 07:14:55.146308 | orchestrator | Friday 19 September 2025 07:14:22 +0000 (0:00:00.599) 0:00:52.225 ****** 2025-09-19 07:14:55.146321 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-09-19 07:14:55.146334 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-09-19 07:14:55.146347 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-09-19 07:14:55.146360 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-09-19 07:14:55.146373 | orchestrator 
| 2025-09-19 07:14:55.146385 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:14:55.146399 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 07:14:55.146413 | orchestrator | 2025-09-19 07:14:55.146425 | orchestrator | 2025-09-19 07:14:55.146482 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:14:55.146497 | orchestrator | Friday 19 September 2025 07:14:23 +0000 (0:00:01.332) 0:00:53.558 ****** 2025-09-19 07:14:55.146510 | orchestrator | =============================================================================== 2025-09-19 07:14:55.146521 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 39.04s 2025-09-19 07:14:55.146532 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.84s 2025-09-19 07:14:55.146543 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 2.56s 2025-09-19 07:14:55.146554 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.33s 2025-09-19 07:14:55.146565 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.28s 2025-09-19 07:14:55.146576 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.16s 2025-09-19 07:14:55.146587 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.00s 2025-09-19 07:14:55.146598 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.83s 2025-09-19 07:14:55.146609 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.62s 2025-09-19 07:14:55.146620 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.60s 2025-09-19 07:14:55.146631 | orchestrator | 
osism.services.cephclient : Remove old wrapper scripts ------------------ 0.41s 2025-09-19 07:14:55.146642 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.27s 2025-09-19 07:14:55.146653 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.25s 2025-09-19 07:14:55.146664 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2025-09-19 07:14:55.146675 | orchestrator | 2025-09-19 07:14:55.146687 | orchestrator | 2025-09-19 07:14:55.146698 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:14:55.146709 | orchestrator | 2025-09-19 07:14:55.146720 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 07:14:55.146731 | orchestrator | Friday 19 September 2025 07:14:27 +0000 (0:00:00.172) 0:00:00.172 ****** 2025-09-19 07:14:55.146742 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:14:55.146785 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:14:55.146796 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:14:55.146807 | orchestrator | 2025-09-19 07:14:55.146819 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:14:55.146830 | orchestrator | Friday 19 September 2025 07:14:27 +0000 (0:00:00.261) 0:00:00.434 ****** 2025-09-19 07:14:55.146841 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-19 07:14:55.146852 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-19 07:14:55.146863 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-19 07:14:55.146875 | orchestrator | 2025-09-19 07:14:55.146886 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-09-19 07:14:55.146897 | orchestrator | 2025-09-19 07:14:55.146908 | orchestrator | TASK [Waiting for 
Keystone public port to be UP] ******************************* 2025-09-19 07:14:55.146928 | orchestrator | Friday 19 September 2025 07:14:27 +0000 (0:00:00.540) 0:00:00.975 ****** 2025-09-19 07:14:55.146939 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:14:55.146950 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:14:55.146961 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:14:55.146972 | orchestrator | 2025-09-19 07:14:55.146983 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:14:55.146995 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:14:55.147007 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:14:55.147018 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:14:55.147029 | orchestrator | 2025-09-19 07:14:55.147040 | orchestrator | 2025-09-19 07:14:55.147052 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:14:55.147063 | orchestrator | Friday 19 September 2025 07:14:28 +0000 (0:00:00.694) 0:00:01.670 ****** 2025-09-19 07:14:55.147074 | orchestrator | =============================================================================== 2025-09-19 07:14:55.147085 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.70s 2025-09-19 07:14:55.147096 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.54s 2025-09-19 07:14:55.147107 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s 2025-09-19 07:14:55.147118 | orchestrator | 2025-09-19 07:14:55.147129 | orchestrator | 2025-09-19 07:14:55.147140 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:14:55.147151 
| orchestrator | 2025-09-19 07:14:55.147162 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 07:14:55.147179 | orchestrator | Friday 19 September 2025 07:12:14 +0000 (0:00:00.270) 0:00:00.270 ****** 2025-09-19 07:14:55.147190 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:14:55.147201 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:14:55.147212 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:14:55.147223 | orchestrator | 2025-09-19 07:14:55.147235 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:14:55.147246 | orchestrator | Friday 19 September 2025 07:12:14 +0000 (0:00:00.307) 0:00:00.578 ****** 2025-09-19 07:14:55.147257 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-19 07:14:55.147268 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-19 07:14:55.147280 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-19 07:14:55.147291 | orchestrator | 2025-09-19 07:14:55.147302 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-09-19 07:14:55.147313 | orchestrator | 2025-09-19 07:14:55.147359 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 07:14:55.147372 | orchestrator | Friday 19 September 2025 07:12:15 +0000 (0:00:00.405) 0:00:00.983 ****** 2025-09-19 07:14:55.147384 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:14:55.147395 | orchestrator | 2025-09-19 07:14:55.147406 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-09-19 07:14:55.147418 | orchestrator | Friday 19 September 2025 07:12:15 +0000 (0:00:00.529) 0:00:01.513 ****** 2025-09-19 07:14:55.147435 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:14:55.147459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:14:55.147479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:14:55.147523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 07:14:55.147538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 07:14:55.147551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 07:14:55.147569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:14:55.147581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:14:55.147593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:14:55.147605 | orchestrator | 2025-09-19 07:14:55.147616 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-09-19 07:14:55.147628 | orchestrator | Friday 19 September 2025 07:12:17 +0000 (0:00:01.918) 0:00:03.432 ****** 2025-09-19 07:14:55.147639 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-09-19 07:14:55.147650 | orchestrator | 2025-09-19 07:14:55.147666 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-09-19 07:14:55.147678 | orchestrator | Friday 19 September 2025 07:12:18 +0000 (0:00:00.884) 0:00:04.317 ****** 2025-09-19 07:14:55.147689 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:14:55.147700 | orchestrator | ok: [testbed-node-1] 
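The keystone container definitions logged above each carry a healthcheck block (`healthcheck_curl http://<node>:5000` with `interval: 30`, `retries: 3`, `timeout: 30`). As a rough illustration only, the retry behaviour such a check implies can be sketched in Python; the function name and injectable `probe` parameter are inventions for testability, not the actual kolla `healthcheck_curl` script:

```python
import time
import urllib.request
import urllib.error

def http_healthcheck(url, retries=3, timeout=30, interval=1, probe=None):
    """Retry an HTTP probe until it succeeds or retries are exhausted,
    mirroring the parameters seen in the logged healthcheck blocks.
    `probe` is injectable so the loop can be exercised without a network."""
    if probe is None:
        def probe(u):  # default: a plain GET (assumed; kolla's script differs)
            with urllib.request.urlopen(u, timeout=timeout):
                return True
    for attempt in range(retries):
        try:
            if probe(url):
                return True
        except (urllib.error.URLError, OSError):
            pass  # endpoint not up yet; retry after the interval
        if attempt < retries - 1:
            time.sleep(interval)
    return False
```

With `retries: 3` the service gets three chances per check cycle before the container is reported unhealthy.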
2025-09-19 07:14:55.147711 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:14:55.147722 | orchestrator | 2025-09-19 07:14:55.147733 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-09-19 07:14:55.147798 | orchestrator | Friday 19 September 2025 07:12:18 +0000 (0:00:00.472) 0:00:04.789 ****** 2025-09-19 07:14:55.147812 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 07:14:55.147823 | orchestrator | 2025-09-19 07:14:55.147834 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 07:14:55.147852 | orchestrator | Friday 19 September 2025 07:12:19 +0000 (0:00:00.659) 0:00:05.449 ****** 2025-09-19 07:14:55.147863 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:14:55.147882 | orchestrator | 2025-09-19 07:14:55.147893 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-09-19 07:14:55.147904 | orchestrator | Friday 19 September 2025 07:12:20 +0000 (0:00:00.526) 0:00:05.976 ****** 2025-09-19 07:14:55.147916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:14:55.147930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:14:55.147943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 
'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:14:55.147961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 07:14:55.147990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 07:14:55.148003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 07:14:55.148014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:14:55.148026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:14:55.148037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:14:55.148049 | orchestrator | 2025-09-19 07:14:55.148060 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-09-19 07:14:55.148072 | orchestrator | Friday 19 September 2025 07:12:23 +0000 (0:00:03.512) 0:00:09.488 ****** 2025-09-19 07:14:55.148108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 07:14:55.148147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 07:14:55.148167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 07:14:55.148185 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:55.148205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 07:14:55.148223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 07:14:55.148250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 07:14:55.148279 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:14:55.148314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 07:14:55.148334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 07:14:55.148353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 07:14:55.148371 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:14:55.148383 | orchestrator |
2025-09-19 07:14:55.148393 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2025-09-19 07:14:55.148404 | orchestrator | Friday 19 September 2025 07:12:24 +0000 (0:00:00.551) 0:00:10.039 ******
2025-09-19 07:14:55.148415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 07:14:55.148431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 07:14:55.148458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 07:14:55.148469 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:14:55.148479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 07:14:55.148490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 
'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 07:14:55.148501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 07:14:55.148511 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:14:55.148526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 
'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 07:14:55.148551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 07:14:55.148562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 07:14:55.148573 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:14:55.148583 | orchestrator |
2025-09-19 07:14:55.148593 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2025-09-19 07:14:55.148603 | orchestrator | Friday 19 September 2025 07:12:24 +0000 
(0:00:00.786) 0:00:10.826 ******
2025-09-19 07:14:55.148614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 07:14:55.148625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 07:14:55.148653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 07:14:55.148664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 07:14:55.148675 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 07:14:55.148686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 07:14:55.148696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 07:14:55.148706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 07:14:55.148728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 07:14:55.148739 | orchestrator |
2025-09-19 07:14:55.148778 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2025-09-19 07:14:55.148790 | orchestrator | Friday 19 September 2025 07:12:28 +0000 (0:00:03.470) 0:00:14.296 ******
2025-09-19 07:14:55.148809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 07:14:55.148820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 07:14:55.148832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 07:14:55.148849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 07:14:55.148871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}})
2025-09-19 07:14:55.148883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 07:14:55.148894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 07:14:55.148904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': 
'30'}}})
2025-09-19 07:14:55.148915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 07:14:55.148932 | orchestrator |
2025-09-19 07:14:55.148943 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2025-09-19 07:14:55.148953 | orchestrator | Friday 19 September 2025 07:12:33 +0000 (0:00:04.986) 0:00:19.283 ******
2025-09-19 07:14:55.148963 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:14:55.148974 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:14:55.148984 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:14:55.148994 | orchestrator |
2025-09-19 07:14:55.149004 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2025-09-19 07:14:55.149014 | orchestrator | Friday 19 September 2025 07:12:34 +0000 (0:00:01.478) 0:00:20.762 ******
2025-09-19 07:14:55.149024 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:14:55.149034 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:14:55.149044 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:14:55.149054 | orchestrator |
2025-09-19 07:14:55.149064 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2025-09-19 07:14:55.149074 | orchestrator | Friday 19 September 2025 07:12:35 +0000 (0:00:00.290) 0:00:21.296 ******
2025-09-19 07:14:55.149084 | orchestrator | 
skipping: [testbed-node-0]
2025-09-19 07:14:55.149094 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:14:55.149108 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:14:55.149119 | orchestrator |
2025-09-19 07:14:55.149129 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2025-09-19 07:14:55.149139 | orchestrator | Friday 19 September 2025 07:12:35 +0000 (0:00:00.535) 0:00:21.587 ******
2025-09-19 07:14:55.149149 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:14:55.149159 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:14:55.149169 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:14:55.149179 | orchestrator |
2025-09-19 07:14:55.149189 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2025-09-19 07:14:55.149199 | orchestrator | Friday 19 September 2025 07:12:36 +0000 (0:00:00.535) 0:00:22.122 ******
2025-09-19 07:14:55.149221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}})
2025-09-19 07:14:55.149241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 07:14:55.149276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 07:14:55.149297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 07:14:55.149331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 07:14:55.149344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 07:14:55.149354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:14:55.149371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:14:55.149382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:14:55.149392 | orchestrator | 2025-09-19 07:14:55.149402 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 07:14:55.149412 | orchestrator | Friday 19 September 2025 07:12:38 +0000 (0:00:02.270) 0:00:24.393 ****** 2025-09-19 07:14:55.149422 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:55.149432 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:14:55.149442 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:14:55.149452 | orchestrator | 2025-09-19 07:14:55.149462 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-09-19 07:14:55.149472 | orchestrator | Friday 19 September 2025 07:12:38 +0000 (0:00:00.281) 0:00:24.675 ****** 2025-09-19 07:14:55.149482 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-19 07:14:55.149493 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-19 07:14:55.149503 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-19 07:14:55.149513 | orchestrator | 2025-09-19 07:14:55.149523 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-09-19 07:14:55.149537 | orchestrator | Friday 19 September 2025 07:12:40 +0000 (0:00:02.055) 0:00:26.730 ****** 2025-09-19 07:14:55.149547 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 07:14:55.149557 | orchestrator | 2025-09-19 07:14:55.149567 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-09-19 07:14:55.149577 | orchestrator | Friday 19 
September 2025 07:12:42 +0000 (0:00:01.533) 0:00:28.263 ****** 2025-09-19 07:14:55.149587 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:55.149597 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:14:55.149607 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:14:55.149617 | orchestrator | 2025-09-19 07:14:55.149627 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-09-19 07:14:55.149637 | orchestrator | Friday 19 September 2025 07:12:42 +0000 (0:00:00.563) 0:00:28.827 ****** 2025-09-19 07:14:55.149647 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-19 07:14:55.149662 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 07:14:55.149673 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-19 07:14:55.149683 | orchestrator | 2025-09-19 07:14:55.149693 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-09-19 07:14:55.149703 | orchestrator | Friday 19 September 2025 07:12:44 +0000 (0:00:01.106) 0:00:29.934 ****** 2025-09-19 07:14:55.149713 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:14:55.149723 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:14:55.149739 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:14:55.149774 | orchestrator | 2025-09-19 07:14:55.149785 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-09-19 07:14:55.149795 | orchestrator | Friday 19 September 2025 07:12:44 +0000 (0:00:00.302) 0:00:30.237 ****** 2025-09-19 07:14:55.149805 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-19 07:14:55.149815 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-19 07:14:55.149824 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-19 07:14:55.149834 | orchestrator | changed: 
[testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-19 07:14:55.149844 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-19 07:14:55.149855 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-19 07:14:55.149872 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-19 07:14:55.149889 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-19 07:14:55.149905 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-19 07:14:55.149921 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-19 07:14:55.149935 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-19 07:14:55.149950 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-19 07:14:55.149965 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-19 07:14:55.149980 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-19 07:14:55.149996 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-19 07:14:55.150011 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-19 07:14:55.150072 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-19 07:14:55.150091 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 
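The fernet-rotate.sh / fernet-push.sh templates staged above drive Keystone's fernet key rotation. As a rough illustration of the promotion scheme those scripts implement (index 0 is the staged key, the highest index is the primary, the rest are secondaries), here is a minimal stdlib-only Python sketch; the `rotate` helper and the in-memory dict are illustrative assumptions, not kolla-ansible's actual implementation:

```python
import os
import base64

def rotate(keys):
    """Keystone-style fernet key rotation (illustrative sketch).

    Layout convention: key 0 is the *staged* key, the highest index is
    the *primary* key (used for encryption), and everything in between
    is a *secondary* key kept only for decrypting older tokens.
    Rotation promotes the staged key to primary and stages a fresh key.
    """
    rotated = dict(keys)
    rotated[max(rotated) + 1] = rotated.pop(0)              # staged -> new primary
    rotated[0] = base64.urlsafe_b64encode(os.urandom(32))   # fresh staged key
    return rotated

# Start with one staged key (0) and one primary key (1), as after bootstrap.
keys = {0: b"staged-key", 1: b"primary-key"}
keys = rotate(keys)
# Index 2 (the old staged key) is now primary, index 1 is a secondary,
# and index 0 holds a freshly generated staged key.
```

The fernet-node-sync.sh / fernet-push.sh steps then copy the rotated key directory to the other keystone hosts over the keystone-ssh container, which is why the log distributes `id_rsa` and `ssh_config` alongside the rotation scripts.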
2025-09-19 07:14:55.150108 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-19 07:14:55.150124 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-19 07:14:55.150141 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-19 07:14:55.150158 | orchestrator | 2025-09-19 07:14:55.150174 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-09-19 07:14:55.150185 | orchestrator | Friday 19 September 2025 07:12:53 +0000 (0:00:09.134) 0:00:39.371 ****** 2025-09-19 07:14:55.150195 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-19 07:14:55.150205 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-19 07:14:55.150215 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-19 07:14:55.150224 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-19 07:14:55.150234 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-19 07:14:55.150244 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-19 07:14:55.150253 | orchestrator | 2025-09-19 07:14:55.150263 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-09-19 07:14:55.150283 | orchestrator | Friday 19 September 2025 07:12:56 +0000 (0:00:02.590) 0:00:41.962 ****** 2025-09-19 07:14:55.150311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:14:55.150325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:14:55.150336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:14:55.150348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 07:14:55.150362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 07:14:55.150386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 07:14:55.150397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:14:55.150408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:14:55.150419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:14:55.150429 | orchestrator | 2025-09-19 07:14:55.150439 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 07:14:55.150449 | orchestrator | Friday 19 September 2025 07:12:58 +0000 (0:00:02.115) 0:00:44.078 ****** 2025-09-19 07:14:55.150459 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:55.150469 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:14:55.150479 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:14:55.150489 | orchestrator | 2025-09-19 07:14:55.150499 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-09-19 07:14:55.150509 | orchestrator | Friday 19 September 2025 07:12:58 +0000 (0:00:00.239) 0:00:44.317 ****** 2025-09-19 07:14:55.150519 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:14:55.150529 | orchestrator | 2025-09-19 07:14:55.150538 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-09-19 07:14:55.150554 | 
orchestrator | Friday 19 September 2025 07:13:00 +0000 (0:00:02.134) 0:00:46.452 ****** 2025-09-19 07:14:55.150564 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:14:55.150574 | orchestrator | 2025-09-19 07:14:55.150583 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-09-19 07:14:55.150593 | orchestrator | Friday 19 September 2025 07:13:02 +0000 (0:00:02.117) 0:00:48.570 ****** 2025-09-19 07:14:55.150603 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:14:55.150613 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:14:55.150623 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:14:55.150632 | orchestrator | 2025-09-19 07:14:55.150642 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-09-19 07:14:55.150652 | orchestrator | Friday 19 September 2025 07:13:03 +0000 (0:00:01.099) 0:00:49.670 ****** 2025-09-19 07:14:55.150662 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:14:55.150672 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:14:55.150681 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:14:55.150691 | orchestrator | 2025-09-19 07:14:55.150701 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-09-19 07:14:55.150715 | orchestrator | Friday 19 September 2025 07:13:04 +0000 (0:00:00.305) 0:00:49.976 ****** 2025-09-19 07:14:55.150725 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:55.150735 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:14:55.150810 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:14:55.150829 | orchestrator | 2025-09-19 07:14:55.150845 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-09-19 07:14:55.150862 | orchestrator | Friday 19 September 2025 07:13:04 +0000 (0:00:00.279) 0:00:50.256 ****** 2025-09-19 07:14:55.150878 | orchestrator | changed: [testbed-node-0] 2025-09-19 
07:14:55.150895 | orchestrator | 2025-09-19 07:14:55.150912 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-09-19 07:14:55.150927 | orchestrator | Friday 19 September 2025 07:13:18 +0000 (0:00:14.075) 0:01:04.331 ****** 2025-09-19 07:14:55.150937 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:14:55.150947 | orchestrator | 2025-09-19 07:14:55.150965 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-19 07:14:55.150975 | orchestrator | Friday 19 September 2025 07:13:28 +0000 (0:00:10.160) 0:01:14.492 ****** 2025-09-19 07:14:55.150985 | orchestrator | 2025-09-19 07:14:55.150995 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-19 07:14:55.151005 | orchestrator | Friday 19 September 2025 07:13:28 +0000 (0:00:00.085) 0:01:14.577 ****** 2025-09-19 07:14:55.151015 | orchestrator | 2025-09-19 07:14:55.151025 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-19 07:14:55.151035 | orchestrator | Friday 19 September 2025 07:13:28 +0000 (0:00:00.268) 0:01:14.846 ****** 2025-09-19 07:14:55.151045 | orchestrator | 2025-09-19 07:14:55.151054 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-09-19 07:14:55.151064 | orchestrator | Friday 19 September 2025 07:13:29 +0000 (0:00:00.070) 0:01:14.916 ****** 2025-09-19 07:14:55.151074 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:14:55.151084 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:14:55.151094 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:14:55.151104 | orchestrator | 2025-09-19 07:14:55.151114 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-09-19 07:14:55.151123 | orchestrator | Friday 19 September 2025 07:13:51 +0000 (0:00:22.076) 0:01:36.993 ****** 2025-09-19 
07:14:55.151133 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:14:55.151143 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:14:55.151153 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:14:55.151163 | orchestrator | 2025-09-19 07:14:55.151172 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-09-19 07:14:55.151182 | orchestrator | Friday 19 September 2025 07:14:01 +0000 (0:00:10.223) 0:01:47.216 ****** 2025-09-19 07:14:55.151192 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:14:55.151211 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:14:55.151221 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:14:55.151231 | orchestrator | 2025-09-19 07:14:55.151241 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 07:14:55.151250 | orchestrator | Friday 19 September 2025 07:14:07 +0000 (0:00:05.917) 0:01:53.134 ****** 2025-09-19 07:14:55.151260 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:14:55.151270 | orchestrator | 2025-09-19 07:14:55.151280 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-09-19 07:14:55.151290 | orchestrator | Friday 19 September 2025 07:14:07 +0000 (0:00:00.759) 0:01:53.893 ****** 2025-09-19 07:14:55.151300 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:14:55.151310 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:14:55.151319 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:14:55.151329 | orchestrator | 2025-09-19 07:14:55.151339 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-09-19 07:14:55.151349 | orchestrator | Friday 19 September 2025 07:14:08 +0000 (0:00:00.810) 0:01:54.704 ****** 2025-09-19 07:14:55.151357 | orchestrator | changed: [testbed-node-0] 2025-09-19 
07:14:55.151365 | orchestrator | 2025-09-19 07:14:55.151373 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-09-19 07:14:55.151381 | orchestrator | Friday 19 September 2025 07:14:10 +0000 (0:00:01.761) 0:01:56.465 ****** 2025-09-19 07:14:55.151389 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-09-19 07:14:55.151398 | orchestrator | 2025-09-19 07:14:55.151406 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-09-19 07:14:55.151414 | orchestrator | Friday 19 September 2025 07:14:21 +0000 (0:00:10.533) 0:02:06.999 ****** 2025-09-19 07:14:55.151422 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-09-19 07:14:55.151430 | orchestrator | 2025-09-19 07:14:55.151438 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-09-19 07:14:55.151446 | orchestrator | Friday 19 September 2025 07:14:41 +0000 (0:00:20.752) 0:02:27.751 ****** 2025-09-19 07:14:55.151454 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-09-19 07:14:55.151462 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-09-19 07:14:55.151470 | orchestrator | 2025-09-19 07:14:55.151478 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-09-19 07:14:55.151486 | orchestrator | Friday 19 September 2025 07:14:48 +0000 (0:00:06.854) 0:02:34.606 ****** 2025-09-19 07:14:55.151494 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:55.151502 | orchestrator | 2025-09-19 07:14:55.151510 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-09-19 07:14:55.151518 | orchestrator | Friday 19 September 2025 07:14:48 +0000 (0:00:00.110) 0:02:34.716 ****** 2025-09-19 07:14:55.151526 | orchestrator | 
skipping: [testbed-node-0] 2025-09-19 07:14:55.151534 | orchestrator | 2025-09-19 07:14:55.151543 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-09-19 07:14:55.151551 | orchestrator | Friday 19 September 2025 07:14:49 +0000 (0:00:00.322) 0:02:35.038 ****** 2025-09-19 07:14:55.151559 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:55.151567 | orchestrator | 2025-09-19 07:14:55.151575 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-09-19 07:14:55.151588 | orchestrator | Friday 19 September 2025 07:14:49 +0000 (0:00:00.132) 0:02:35.170 ****** 2025-09-19 07:14:55.151596 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:55.151604 | orchestrator | 2025-09-19 07:14:55.151616 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-09-19 07:14:55.151630 | orchestrator | Friday 19 September 2025 07:14:49 +0000 (0:00:00.324) 0:02:35.495 ****** 2025-09-19 07:14:55.151643 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:14:55.151664 | orchestrator | 2025-09-19 07:14:55.151677 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 07:14:55.151690 | orchestrator | Friday 19 September 2025 07:14:52 +0000 (0:00:02.836) 0:02:38.332 ****** 2025-09-19 07:14:55.151702 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:14:55.151715 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:14:55.151728 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:14:55.151741 | orchestrator | 2025-09-19 07:14:55.151781 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:14:55.151796 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-09-19 07:14:55.151810 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 
skipped=10  rescued=0 ignored=0 2025-09-19 07:14:55.151825 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-19 07:14:55.151838 | orchestrator | 2025-09-19 07:14:55.151850 | orchestrator | 2025-09-19 07:14:55.151865 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:14:55.151874 | orchestrator | Friday 19 September 2025 07:14:53 +0000 (0:00:00.851) 0:02:39.183 ****** 2025-09-19 07:14:55.151883 | orchestrator | =============================================================================== 2025-09-19 07:14:55.151891 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 22.08s 2025-09-19 07:14:55.151898 | orchestrator | service-ks-register : keystone | Creating services --------------------- 20.75s 2025-09-19 07:14:55.151906 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.08s 2025-09-19 07:14:55.151914 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.53s 2025-09-19 07:14:55.151923 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.22s 2025-09-19 07:14:55.151930 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.16s 2025-09-19 07:14:55.151939 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.13s 2025-09-19 07:14:55.151947 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.85s 2025-09-19 07:14:55.151955 | orchestrator | keystone : Restart keystone container ----------------------------------- 5.92s 2025-09-19 07:14:55.151963 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.99s 2025-09-19 07:14:55.151971 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.51s 2025-09-19 
07:14:55.151979 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.47s 2025-09-19 07:14:55.151987 | orchestrator | keystone : Creating default user role ----------------------------------- 2.84s 2025-09-19 07:14:55.151995 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.59s 2025-09-19 07:14:55.152003 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.27s 2025-09-19 07:14:55.152011 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.13s 2025-09-19 07:14:55.152019 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.12s 2025-09-19 07:14:55.152027 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.12s 2025-09-19 07:14:55.152035 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.06s 2025-09-19 07:14:55.152043 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.92s 2025-09-19 07:14:55.152051 | orchestrator | 2025-09-19 07:14:55 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:14:58.257149 | orchestrator | 2025-09-19 07:14:58 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:14:58.257416 | orchestrator | 2025-09-19 07:14:58 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:14:58.258147 | orchestrator | 2025-09-19 07:14:58 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:14:58.258879 | orchestrator | 2025-09-19 07:14:58 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:14:58.259605 | orchestrator | 2025-09-19 07:14:58 | INFO  | Task 2d9d0847-c2c7-415c-a14d-46707cb4ea47 is in state STARTED 2025-09-19 07:14:58.259628 | orchestrator | 2025-09-19 07:14:58 | INFO  | Wait 1 second(s) until the 
next check 2025-09-19 07:15:01.294927 | orchestrator | 2025-09-19 07:15:01 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:15:01.295017 | orchestrator | 2025-09-19 07:15:01 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:15:01.295048 | orchestrator | 2025-09-19 07:15:01 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:15:01.295069 | orchestrator | 2025-09-19 07:15:01 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:15:01.295267 | orchestrator | 2025-09-19 07:15:01 | INFO  | Task 2d9d0847-c2c7-415c-a14d-46707cb4ea47 is in state STARTED 2025-09-19 07:15:01.297259 | orchestrator | 2025-09-19 07:15:01 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:04.320467 | orchestrator | 2025-09-19 07:15:04 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:15:04.321644 | orchestrator | 2025-09-19 07:15:04 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:15:04.322653 | orchestrator | 2025-09-19 07:15:04 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:15:04.323334 | orchestrator | 2025-09-19 07:15:04 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:15:04.324181 | orchestrator | 2025-09-19 07:15:04 | INFO  | Task 2d9d0847-c2c7-415c-a14d-46707cb4ea47 is in state STARTED 2025-09-19 07:15:04.324211 | orchestrator | 2025-09-19 07:15:04 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:07.370799 | orchestrator | 2025-09-19 07:15:07 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:15:07.371669 | orchestrator | 2025-09-19 07:15:07 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:15:07.372379 | orchestrator | 2025-09-19 07:15:07 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state 
STARTED 2025-09-19 07:15:07.373533 | orchestrator | 2025-09-19 07:15:07 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:15:07.375072 | orchestrator | 2025-09-19 07:15:07 | INFO  | Task 2d9d0847-c2c7-415c-a14d-46707cb4ea47 is in state STARTED 2025-09-19 07:15:07.375662 | orchestrator | 2025-09-19 07:15:07 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:10.408494 | orchestrator | 2025-09-19 07:15:10 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:15:10.408913 | orchestrator | 2025-09-19 07:15:10 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:15:10.410723 | orchestrator | 2025-09-19 07:15:10 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:15:10.412204 | orchestrator | 2025-09-19 07:15:10 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:15:10.414436 | orchestrator | 2025-09-19 07:15:10 | INFO  | Task 2d9d0847-c2c7-415c-a14d-46707cb4ea47 is in state STARTED 2025-09-19 07:15:10.414462 | orchestrator | 2025-09-19 07:15:10 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:13.451890 | orchestrator | 2025-09-19 07:15:13 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:15:13.451997 | orchestrator | 2025-09-19 07:15:13 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:15:13.452541 | orchestrator | 2025-09-19 07:15:13 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:15:13.453113 | orchestrator | 2025-09-19 07:15:13 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:15:13.453888 | orchestrator | 2025-09-19 07:15:13 | INFO  | Task 2d9d0847-c2c7-415c-a14d-46707cb4ea47 is in state STARTED 2025-09-19 07:15:13.453925 | orchestrator | 2025-09-19 07:15:13 | INFO  | Wait 1 second(s) until the next check 2025-09-19 
07:15:16.484973 | orchestrator | 2025-09-19 07:15:16 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:15:16.485073 | orchestrator | 2025-09-19 07:15:16 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:15:16.485719 | orchestrator | 2025-09-19 07:15:16 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:15:16.486520 | orchestrator | 2025-09-19 07:15:16 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:15:16.487156 | orchestrator | 2025-09-19 07:15:16 | INFO  | Task 2d9d0847-c2c7-415c-a14d-46707cb4ea47 is in state SUCCESS 2025-09-19 07:15:16.487991 | orchestrator | 2025-09-19 07:15:16 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:15:16.488014 | orchestrator | 2025-09-19 07:15:16 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:19.513263 | orchestrator | 2025-09-19 07:15:19 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:15:19.513370 | orchestrator | 2025-09-19 07:15:19 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:15:19.513713 | orchestrator | 2025-09-19 07:15:19 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:15:19.514304 | orchestrator | 2025-09-19 07:15:19 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:15:19.515245 | orchestrator | 2025-09-19 07:15:19 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:15:19.515335 | orchestrator | 2025-09-19 07:15:19 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:22.545906 | orchestrator | 2025-09-19 07:15:22 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:15:22.546086 | orchestrator | 2025-09-19 07:15:22 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 
07:15:22.546937 | orchestrator | 2025-09-19 07:15:22 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:15:22.547520 | orchestrator | 2025-09-19 07:15:22 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:15:22.548509 | orchestrator | 2025-09-19 07:15:22 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:15:22.548530 | orchestrator | 2025-09-19 07:15:22 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:25.581019 | orchestrator | 2025-09-19 07:15:25 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:15:25.581801 | orchestrator | 2025-09-19 07:15:25 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:15:25.582438 | orchestrator | 2025-09-19 07:15:25 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:15:25.583779 | orchestrator | 2025-09-19 07:15:25 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:15:25.584329 | orchestrator | 2025-09-19 07:15:25 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:15:25.584565 | orchestrator | 2025-09-19 07:15:25 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:28.607475 | orchestrator | 2025-09-19 07:15:28 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:15:28.607598 | orchestrator | 2025-09-19 07:15:28 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:15:28.608224 | orchestrator | 2025-09-19 07:15:28 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:15:28.609710 | orchestrator | 2025-09-19 07:15:28 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:15:28.609980 | orchestrator | 2025-09-19 07:15:28 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 
07:15:28.610004 | orchestrator | 2025-09-19 07:15:28 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:31.632030 | orchestrator | 2025-09-19 07:15:31 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:15:31.632127 | orchestrator | 2025-09-19 07:15:31 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:15:31.632545 | orchestrator | 2025-09-19 07:15:31 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:15:31.633050 | orchestrator | 2025-09-19 07:15:31 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:15:31.633659 | orchestrator | 2025-09-19 07:15:31 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:15:31.634462 | orchestrator | 2025-09-19 07:15:31 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:34.658275 | orchestrator | 2025-09-19 07:15:34 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:15:34.658374 | orchestrator | 2025-09-19 07:15:34 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:15:34.658865 | orchestrator | 2025-09-19 07:15:34 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:15:34.659345 | orchestrator | 2025-09-19 07:15:34 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:15:34.659979 | orchestrator | 2025-09-19 07:15:34 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:15:34.660002 | orchestrator | 2025-09-19 07:15:34 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:37.680029 | orchestrator | 2025-09-19 07:15:37 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:15:37.680261 | orchestrator | 2025-09-19 07:15:37 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:15:37.680715 | orchestrator 
| 2025-09-19 07:15:37 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:15:37.681277 | orchestrator | 2025-09-19 07:15:37 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:15:37.682558 | orchestrator | 2025-09-19 07:15:37 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:15:37.682649 | orchestrator | 2025-09-19 07:15:37 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:40.706237 | orchestrator | 2025-09-19 07:15:40 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:15:40.706718 | orchestrator | 2025-09-19 07:15:40 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:15:40.707002 | orchestrator | 2025-09-19 07:15:40 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:15:40.707715 | orchestrator | 2025-09-19 07:15:40 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:15:40.708374 | orchestrator | 2025-09-19 07:15:40 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:15:40.708396 | orchestrator | 2025-09-19 07:15:40 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:43.733411 | orchestrator | 2025-09-19 07:15:43 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:15:43.733980 | orchestrator | 2025-09-19 07:15:43 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:15:43.734462 | orchestrator | 2025-09-19 07:15:43 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:15:43.735402 | orchestrator | 2025-09-19 07:15:43 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:15:43.736252 | orchestrator | 2025-09-19 07:15:43 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:15:43.736406 | orchestrator | 
2025-09-19 07:15:43 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:46.775634 | orchestrator | 2025-09-19 07:15:46 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:15:46.775800 | orchestrator | 2025-09-19 07:15:46 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:15:46.775965 | orchestrator | 2025-09-19 07:15:46 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:15:46.776648 | orchestrator | 2025-09-19 07:15:46 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:15:46.777300 | orchestrator | 2025-09-19 07:15:46 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:15:46.777325 | orchestrator | 2025-09-19 07:15:46 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:49.809039 | orchestrator | 2025-09-19 07:15:49 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state STARTED 2025-09-19 07:15:49.809129 | orchestrator | 2025-09-19 07:15:49 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:15:49.810577 | orchestrator | 2025-09-19 07:15:49 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:15:49.811090 | orchestrator | 2025-09-19 07:15:49 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:15:49.811610 | orchestrator | 2025-09-19 07:15:49 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:15:49.811630 | orchestrator | 2025-09-19 07:15:49 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:52.835262 | orchestrator | 2025-09-19 07:15:52.835372 | orchestrator | 2025-09-19 07:15:52.835380 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:15:52.835385 | orchestrator | 2025-09-19 07:15:52.835390 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-09-19 07:15:52.835396 | orchestrator | Friday 19 September 2025 07:14:34 +0000 (0:00:00.287) 0:00:00.287 ****** 2025-09-19 07:15:52.835401 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:15:52.835407 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:15:52.835411 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:15:52.835416 | orchestrator | ok: [testbed-manager] 2025-09-19 07:15:52.835421 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:15:52.835425 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:15:52.835430 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:15:52.835451 | orchestrator | 2025-09-19 07:15:52.835466 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:15:52.835472 | orchestrator | Friday 19 September 2025 07:14:35 +0000 (0:00:00.784) 0:00:01.071 ****** 2025-09-19 07:15:52.835476 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-09-19 07:15:52.835482 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-09-19 07:15:52.835487 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-09-19 07:15:52.835493 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-09-19 07:15:52.835501 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-09-19 07:15:52.835508 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-09-19 07:15:52.835517 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-09-19 07:15:52.835524 | orchestrator | 2025-09-19 07:15:52.835532 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-19 07:15:52.835540 | orchestrator | 2025-09-19 07:15:52.835548 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-09-19 07:15:52.835556 | orchestrator | Friday 19 September 2025 07:14:36 +0000 
(0:00:00.857) 0:00:01.928 ****** 2025-09-19 07:15:52.835564 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:15:52.835573 | orchestrator | 2025-09-19 07:15:52.835580 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-09-19 07:15:52.835588 | orchestrator | Friday 19 September 2025 07:14:38 +0000 (0:00:02.555) 0:00:04.484 ****** 2025-09-19 07:15:52.835595 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-09-19 07:15:52.835602 | orchestrator | 2025-09-19 07:15:52.835609 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-09-19 07:15:52.835616 | orchestrator | Friday 19 September 2025 07:14:49 +0000 (0:00:10.512) 0:00:14.997 ****** 2025-09-19 07:15:52.835624 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-09-19 07:15:52.835634 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-09-19 07:15:52.835641 | orchestrator | 2025-09-19 07:15:52.835695 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-09-19 07:15:52.835706 | orchestrator | Friday 19 September 2025 07:14:55 +0000 (0:00:06.073) 0:00:21.070 ****** 2025-09-19 07:15:52.835713 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 07:15:52.835785 | orchestrator | 2025-09-19 07:15:52.835795 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-09-19 07:15:52.835802 | orchestrator | Friday 19 September 2025 07:14:58 +0000 (0:00:03.521) 0:00:24.592 ****** 2025-09-19 07:15:52.835810 | orchestrator | [WARNING]: Module did not set no_log for 
update_password 2025-09-19 07:15:52.835817 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-09-19 07:15:52.835824 | orchestrator | 2025-09-19 07:15:52.835832 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-09-19 07:15:52.835839 | orchestrator | Friday 19 September 2025 07:15:03 +0000 (0:00:04.252) 0:00:28.844 ****** 2025-09-19 07:15:52.835846 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 07:15:52.835854 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-09-19 07:15:52.835862 | orchestrator | 2025-09-19 07:15:52.835869 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-09-19 07:15:52.835876 | orchestrator | Friday 19 September 2025 07:15:09 +0000 (0:00:06.483) 0:00:35.328 ****** 2025-09-19 07:15:52.835884 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-09-19 07:15:52.835891 | orchestrator | 2025-09-19 07:15:52.835898 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:15:52.835917 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:15:52.835926 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:15:52.835936 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:15:52.835944 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:15:52.835953 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:15:52.835978 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:15:52.835987 | orchestrator | testbed-node-5 : ok=3  changed=0 
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:15:52.835994 | orchestrator | 2025-09-19 07:15:52.836001 | orchestrator | 2025-09-19 07:15:52.836008 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:15:52.836015 | orchestrator | Friday 19 September 2025 07:15:15 +0000 (0:00:05.495) 0:00:40.823 ****** 2025-09-19 07:15:52.836023 | orchestrator | =============================================================================== 2025-09-19 07:15:52.836030 | orchestrator | service-ks-register : ceph-rgw | Creating services --------------------- 10.51s 2025-09-19 07:15:52.836038 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.48s 2025-09-19 07:15:52.836045 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.07s 2025-09-19 07:15:52.836058 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.50s 2025-09-19 07:15:52.836066 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.25s 2025-09-19 07:15:52.836073 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.53s 2025-09-19 07:15:52.836080 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.56s 2025-09-19 07:15:52.836087 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.86s 2025-09-19 07:15:52.836094 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.78s 2025-09-19 07:15:52.836101 | orchestrator | 2025-09-19 07:15:52.836108 | orchestrator | 2025-09-19 07:15:52.836115 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-09-19 07:15:52.836123 | orchestrator | 2025-09-19 07:15:52.836130 | orchestrator | TASK [Disable the ceph dashboard] 
********************************************** 2025-09-19 07:15:52.836138 | orchestrator | Friday 19 September 2025 07:14:27 +0000 (0:00:00.244) 0:00:00.244 ****** 2025-09-19 07:15:52.836146 | orchestrator | changed: [testbed-manager] 2025-09-19 07:15:52.836153 | orchestrator | 2025-09-19 07:15:52.836161 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-09-19 07:15:52.836168 | orchestrator | Friday 19 September 2025 07:14:29 +0000 (0:00:01.829) 0:00:02.074 ****** 2025-09-19 07:15:52.836176 | orchestrator | changed: [testbed-manager] 2025-09-19 07:15:52.836184 | orchestrator | 2025-09-19 07:15:52.836193 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-09-19 07:15:52.836200 | orchestrator | Friday 19 September 2025 07:14:30 +0000 (0:00:00.932) 0:00:03.007 ****** 2025-09-19 07:15:52.836208 | orchestrator | changed: [testbed-manager] 2025-09-19 07:15:52.836215 | orchestrator | 2025-09-19 07:15:52.836223 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-09-19 07:15:52.836230 | orchestrator | Friday 19 September 2025 07:14:31 +0000 (0:00:01.375) 0:00:04.382 ****** 2025-09-19 07:15:52.836237 | orchestrator | changed: [testbed-manager] 2025-09-19 07:15:52.836245 | orchestrator | 2025-09-19 07:15:52.836253 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-09-19 07:15:52.836270 | orchestrator | Friday 19 September 2025 07:14:32 +0000 (0:00:01.379) 0:00:05.761 ****** 2025-09-19 07:15:52.836278 | orchestrator | changed: [testbed-manager] 2025-09-19 07:15:52.836286 | orchestrator | 2025-09-19 07:15:52.836294 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-09-19 07:15:52.836302 | orchestrator | Friday 19 September 2025 07:14:33 +0000 (0:00:01.114) 0:00:06.876 ****** 2025-09-19 07:15:52.836310 | orchestrator | 
changed: [testbed-manager] 2025-09-19 07:15:52.836319 | orchestrator | 2025-09-19 07:15:52.836327 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-09-19 07:15:52.836335 | orchestrator | Friday 19 September 2025 07:14:34 +0000 (0:00:00.881) 0:00:07.757 ****** 2025-09-19 07:15:52.836344 | orchestrator | changed: [testbed-manager] 2025-09-19 07:15:52.836352 | orchestrator | 2025-09-19 07:15:52.836360 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-09-19 07:15:52.836368 | orchestrator | Friday 19 September 2025 07:14:36 +0000 (0:00:01.159) 0:00:08.917 ****** 2025-09-19 07:15:52.836376 | orchestrator | changed: [testbed-manager] 2025-09-19 07:15:52.836385 | orchestrator | 2025-09-19 07:15:52.836392 | orchestrator | TASK [Create admin user] ******************************************************* 2025-09-19 07:15:52.836400 | orchestrator | Friday 19 September 2025 07:14:37 +0000 (0:00:01.170) 0:00:10.088 ****** 2025-09-19 07:15:52.836407 | orchestrator | changed: [testbed-manager] 2025-09-19 07:15:52.836415 | orchestrator | 2025-09-19 07:15:52.836422 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-09-19 07:15:52.836429 | orchestrator | Friday 19 September 2025 07:15:26 +0000 (0:00:49.428) 0:00:59.516 ****** 2025-09-19 07:15:52.836437 | orchestrator | skipping: [testbed-manager] 2025-09-19 07:15:52.836444 | orchestrator | 2025-09-19 07:15:52.836451 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-19 07:15:52.836457 | orchestrator | 2025-09-19 07:15:52.836464 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-19 07:15:52.836471 | orchestrator | Friday 19 September 2025 07:15:26 +0000 (0:00:00.123) 0:00:59.639 ****** 2025-09-19 07:15:52.836479 | orchestrator | changed: [testbed-node-0] 2025-09-19 
07:15:52.836485 | orchestrator | 2025-09-19 07:15:52.836493 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-19 07:15:52.836500 | orchestrator | 2025-09-19 07:15:52.836507 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-19 07:15:52.836514 | orchestrator | Friday 19 September 2025 07:15:28 +0000 (0:00:01.265) 0:01:00.905 ****** 2025-09-19 07:15:52.836522 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:15:52.836529 | orchestrator | 2025-09-19 07:15:52.836536 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-19 07:15:52.836543 | orchestrator | 2025-09-19 07:15:52.836551 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-19 07:15:52.836558 | orchestrator | Friday 19 September 2025 07:15:39 +0000 (0:00:11.146) 0:01:12.051 ****** 2025-09-19 07:15:52.836566 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:15:52.836572 | orchestrator | 2025-09-19 07:15:52.836588 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:15:52.836594 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 07:15:52.836599 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:15:52.836604 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:15:52.836614 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:15:52.836619 | orchestrator | 2025-09-19 07:15:52.836630 | orchestrator | 2025-09-19 07:15:52.836635 | orchestrator | 2025-09-19 07:15:52.836640 | orchestrator | TASKS RECAP ******************************************************************** 
2025-09-19 07:15:52.836644 | orchestrator | Friday 19 September 2025 07:15:50 +0000 (0:00:11.021) 0:01:23.072 ****** 2025-09-19 07:15:52.836649 | orchestrator | =============================================================================== 2025-09-19 07:15:52.836653 | orchestrator | Create admin user ------------------------------------------------------ 49.43s 2025-09-19 07:15:52.836658 | orchestrator | Restart ceph manager service ------------------------------------------- 23.43s 2025-09-19 07:15:52.836662 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.83s 2025-09-19 07:15:52.836668 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.38s 2025-09-19 07:15:52.836676 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.38s 2025-09-19 07:15:52.836683 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.17s 2025-09-19 07:15:52.836690 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.16s 2025-09-19 07:15:52.836698 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.11s 2025-09-19 07:15:52.836705 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.93s 2025-09-19 07:15:52.836712 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.88s 2025-09-19 07:15:52.836719 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.12s 2025-09-19 07:15:52.836750 | orchestrator | 2025-09-19 07:15:52 | INFO  | Task fb7f1410-87f9-4e98-9029-7099b0c958d3 is in state SUCCESS 2025-09-19 07:15:52.836758 | orchestrator | 2025-09-19 07:15:52 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:15:52.837282 | orchestrator | 2025-09-19 07:15:52 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in 
state STARTED 2025-09-19 07:15:52.837916 | orchestrator | 2025-09-19 07:15:52 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:15:52.838900 | orchestrator | 2025-09-19 07:15:52 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:15:52.838929 | orchestrator | 2025-09-19 07:15:52 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:55.875705 | orchestrator | 2025-09-19 07:15:55 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:15:55.875829 | orchestrator | 2025-09-19 07:15:55 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:15:55.876259 | orchestrator | 2025-09-19 07:15:55 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:15:55.876909 | orchestrator | 2025-09-19 07:15:55 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:15:55.876932 | orchestrator | 2025-09-19 07:15:55 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:58.900478 | orchestrator | 2025-09-19 07:15:58 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:15:58.900576 | orchestrator | 2025-09-19 07:15:58 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:15:58.901050 | orchestrator | 2025-09-19 07:15:58 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:15:58.901553 | orchestrator | 2025-09-19 07:15:58 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:15:58.901588 | orchestrator | 2025-09-19 07:15:58 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:16:01.923455 | orchestrator | 2025-09-19 07:16:01 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:16:01.923549 | orchestrator | 2025-09-19 07:16:01 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 
07:16:01.925653 | orchestrator | 2025-09-19 07:16:01 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:16:01.925936 | orchestrator | 2025-09-19 07:16:01 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:16:01.925975 | orchestrator | 2025-09-19 07:16:01 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:16:04.960792 | orchestrator | 2025-09-19 07:16:04 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:16:04.961766 | orchestrator | 2025-09-19 07:16:04 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:16:04.962577 | orchestrator | 2025-09-19 07:16:04 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:16:04.963762 | orchestrator | 2025-09-19 07:16:04 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:16:04.963807 | orchestrator | 2025-09-19 07:16:04 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:16:08.001275 | orchestrator | 2025-09-19 07:16:08 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:16:08.007462 | orchestrator | 2025-09-19 07:16:08 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:16:08.009494 | orchestrator | 2025-09-19 07:16:08 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:16:08.021481 | orchestrator | 2025-09-19 07:16:08 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:16:08.021551 | orchestrator | 2025-09-19 07:16:08 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:16:11.057496 | orchestrator | 2025-09-19 07:16:11 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED 2025-09-19 07:16:11.057622 | orchestrator | 2025-09-19 07:16:11 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED 2025-09-19 07:16:11.057649 | orchestrator 
| 2025-09-19 07:16:11 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED
2025-09-19 07:16:11.057670 | orchestrator | 2025-09-19 07:16:11 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED
2025-09-19 07:16:11.057690 | orchestrator | 2025-09-19 07:16:11 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:16:14.090926 | orchestrator | 2025-09-19 07:16:14 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED
2025-09-19 07:16:14.091533 | orchestrator | 2025-09-19 07:16:14 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state STARTED
2025-09-19 07:16:14.092202 | orchestrator | 2025-09-19 07:16:14 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED
2025-09-19 07:16:14.092706 | orchestrator | 2025-09-19 07:16:14 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED
2025-09-19 07:16:14.093121 | orchestrator | 2025-09-19 07:16:14 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:17:33.256488 | orchestrator | 2025-09-19 07:17:33 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state STARTED
2025-09-19 07:17:33.261929 | orchestrator | 2025-09-19 07:17:33 | INFO  | Task c1698383-c472-4df5-b858-36a2bd2e028a is in state SUCCESS
2025-09-19 07:17:33.263509 | orchestrator |
2025-09-19 07:17:33.263538 | orchestrator |
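The repeated status lines above come from a client that polls each task's state once per second until it leaves STARTED. A minimal sketch of such a loop, assuming a hypothetical `get_task_state` lookup (the real OSISM client queries its task result backend, and this stub merely fakes two STARTED responses followed by SUCCESS so the loop terminates):

```python
import time

def get_task_state(task_id, states={}):
    # Hypothetical stub, NOT the OSISM client API: report STARTED twice,
    # then SUCCESS, so the polling loop below can finish. The shared
    # default dict keeps a per-task call counter between calls.
    states[task_id] = states.get(task_id, 0) + 1
    return "STARTED" if states[task_id] < 3 else "SUCCESS"

def wait_for_tasks(task_ids, interval=1):
    """Poll every `interval` seconds until every task reaches a final state."""
    pending = list(task_ids)
    while pending:
        for task_id in list(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.remove(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

wait_for_tasks(["cf7c7930-f409-485d-a7db-a227ea58ab5c"])
```

Printing every task on every cycle is what produces the dense log above; a production client might instead log only state changes.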
07:17:33.263548 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 07:17:33.263558 | orchestrator |
2025-09-19 07:17:33.263568 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 07:17:33.263577 | orchestrator | Friday 19 September 2025 07:14:27 +0000 (0:00:00.257) 0:00:00.257 ******
2025-09-19 07:17:33.263586 | orchestrator | ok: [testbed-manager]
2025-09-19 07:17:33.263596 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:17:33.263605 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:17:33.263614 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:17:33.263624 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:17:33.263633 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:17:33.263754 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:17:33.263772 | orchestrator |
2025-09-19 07:17:33.263784 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 07:17:33.263798 | orchestrator | Friday 19 September 2025 07:14:28 +0000 (0:00:00.771) 0:00:01.028 ******
2025-09-19 07:17:33.263808 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-09-19 07:17:33.263817 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-09-19 07:17:33.263850 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-09-19 07:17:33.263896 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-09-19 07:17:33.263940 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-09-19 07:17:33.264035 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-09-19 07:17:33.264054 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-09-19 07:17:33.264068 | orchestrator |
2025-09-19 07:17:33.264082 | orchestrator | PLAY [Apply role prometheus] ***************************************************
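The grouping tasks above follow Ansible's `group_by` pattern: each host is added to a dynamic group whose name is derived from a host variable (here an `enable_prometheus` flag, and analogously the Kolla action). A rough Python model of that dynamic-group computation (the host variables shown are illustrative, not read from the testbed inventory):

```python
from collections import defaultdict

# Illustrative host variables; the real values come from the Kolla/OSISM inventory.
hosts = {
    "testbed-manager": {"enable_prometheus": True},
    "testbed-node-0": {"enable_prometheus": True},
}

def group_hosts(hosts):
    """Mimic `group_by: key=<var>_<value>`: one dynamic group per (var, value) pair."""
    groups = defaultdict(list)
    for name, hostvars in hosts.items():
        for var, value in hostvars.items():
            groups[f"{var}_{value}"].append(name)
    return dict(groups)

print(group_hosts(hosts))
```

This is why the log shows every host reporting `(item=enable_prometheus_True)`: all seven hosts end up in the same `enable_prometheus_True` group, which the prometheus play then targets.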
2025-09-19 07:17:33.264099 | orchestrator |
2025-09-19 07:17:33.264116 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-09-19 07:17:33.264133 | orchestrator | Friday 19 September 2025 07:14:28 +0000 (0:00:00.695) 0:00:01.724 ******
2025-09-19 07:17:33.264188 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:17:33.264246 | orchestrator |
2025-09-19 07:17:33.264265 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2025-09-19 07:17:33.264281 | orchestrator | Friday 19 September 2025 07:14:30 +0000 (0:00:01.438) 0:00:03.163 ******
2025-09-19 07:17:33.264298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:17:33.264330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:17:33.264343 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 07:17:33.264355 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:17:33.264383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:17:33.264394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:17:33.264403 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:17:33.264413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:17:33.264428 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:17:33.264438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:17:33.264448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:17:33.264466 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:17:33.264476 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 07:17:33.264486 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:17:33.264495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:17:33.264509 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:17:33.264519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:17:33.264529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:17:33.264548 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 07:17:33.264559 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:17:33.264569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:17:33.264578 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:17:33.264592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:17:33.264602 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:17:33.264611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:17:33.264620 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 07:17:33.264639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:17:33.264649 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 07:17:33.264658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:17:33.264673 | orchestrator |
2025-09-19 07:17:33.264682 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-09-19 07:17:33.264692 | orchestrator | Friday 19 September 2025 07:14:34 +0000 (0:00:03.832) 0:00:06.995 ******
2025-09-19 07:17:33.264721 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:17:33.264735 | orchestrator |
2025-09-19 07:17:33.264744 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2025-09-19 07:17:33.264753 | orchestrator | Friday 19 September 2025 07:14:35 +0000 (0:00:01.412) 0:00:08.408 ******
2025-09-19 07:17:33.264763 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 07:17:33.264773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:17:33.264782 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:17:33.264800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:17:33.264811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2025-09-19 07:17:33.264826 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:17:33.264836 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:17:33.264845 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:17:33.264854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 
'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:17:33.264864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:17:33.264873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:17:33.264892 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:17:33.264902 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:17:33.264920 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:17:33.264929 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:17:33.264939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:17:33.264948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:17:33.264957 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 07:17:33.264974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:17:33.264989 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 07:17:33.265004 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 07:17:33.265014 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-19 07:17:33.265024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:17:33.265033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:17:33.265042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:17:33.265501 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:17:33.265529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:17:33.265539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:17:33.265549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:17:33.265558 | orchestrator | 2025-09-19 07:17:33.265567 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-09-19 07:17:33.265577 | orchestrator | Friday 19 September 2025 07:14:41 +0000 (0:00:05.997) 0:00:14.405 ****** 2025-09-19 07:17:33.265586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:17:33.265596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:17:33.265605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:17:33.265614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:17:33.265639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:17:33.265650 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:17:33.265659 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-19 07:17:33.265669 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:17:33.265679 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:17:33.265689 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 
'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-19 07:17:33.265719 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:17:33.265745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:17:33.265755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:17:33.265764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:17:33.265774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:17:33.266091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:17:33.266105 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:17:33.266114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:17:33.266124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:17:33.266153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:17:33.266164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:17:33.266175 | orchestrator | skipping: [testbed-manager] 2025-09-19 07:17:33.266186 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:17:33.266196 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:17:33.266207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:17:33.266217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:17:33.266228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 07:17:33.266238 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:17:33.266248 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:17:33.266265 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:17:33.266284 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 07:17:33.266296 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:17:33.266306 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:17:33.266317 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:17:33.266327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 07:17:33.266337 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:17:33.266348 | orchestrator | 2025-09-19 07:17:33.266358 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-09-19 07:17:33.266368 | orchestrator | Friday 19 September 2025 07:14:42 +0000 (0:00:01.453) 0:00:15.859 ****** 2025-09-19 07:17:33.266378 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-19 07:17:33.266392 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:17:33.266405 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:17:33.266419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:17:33.266430 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': 
True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-19 07:17:33.266440 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:17:33.266449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:17:33.266458 | orchestrator | skipping: [testbed-manager] 2025-09-19 07:17:33.266467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:17:33.266482 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:17:33.266501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:17:33.266511 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:17:33.266521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:17:33.266530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:17:33.266539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:17:33.266548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:17:33.266558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:17:33.266571 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:17:33.266580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:17:33.266590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:17:33.266614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:17:33.266624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:17:33.266633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:17:33.266643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:17:33.266652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:17:33.266672 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:17:33.266681 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 07:17:33.266690 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:17:33.266777 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:17:33.266798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': 
{}}})  2025-09-19 07:17:33.266809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 07:17:33.266818 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:17:33.266827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:17:33.266836 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:17:33.266845 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 07:17:33.266860 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:17:33.266869 | orchestrator | 2025-09-19 07:17:33.266878 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-09-19 07:17:33.266888 | orchestrator | Friday 19 September 2025 07:14:44 +0000 (0:00:01.760) 0:00:17.620 ****** 2025-09-19 07:17:33.266897 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-19 07:17:33.266906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:17:33.266924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:17:33.266934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:17:33.266943 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:17:33.266952 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:17:33.266969 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:17:33.266978 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:17:33.266988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 
07:17:33.266997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:17:33.267014 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:17:33.267024 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:17:33.267034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:17:33.267043 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:17:33.267057 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:17:33.267067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:17:33.267077 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 07:17:33.267096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:17:33.267106 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 07:17:33.267115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:17:33.267130 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 07:17:33.267139 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 07:17:33.267149 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:17:33.267158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:17:33.267175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:17:33.267185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:17:33.267194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:17:33.267208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:17:33.267218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:17:33.267227 | orchestrator |
2025-09-19 07:17:33.267236 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-09-19 07:17:33.267245 | orchestrator | Friday 19 September 2025 07:14:50 +0000 (0:00:05.666) 0:00:23.287 ******
2025-09-19 07:17:33.267254 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 07:17:33.267263 | orchestrator |
2025-09-19 07:17:33.267272 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-09-19 07:17:33.267281 | orchestrator | Friday 19 September 2025 07:14:51 +0000 (0:00:00.891) 0:00:24.178 ******
2025-09-19 07:17:33.267291 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095165, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0287933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267300 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095165, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0287933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267318 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095165, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0287933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267329 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1095376, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.095794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267345 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095165, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0287933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267355 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1095158, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.026416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267364 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1095376, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.095794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267373 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1095376, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.095794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267383 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095165, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0287933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267399 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095165, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0287933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267409 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095165, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0287933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267423 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1095376, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.095794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267432 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1095376, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.095794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267442 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095179, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0327933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267451 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1095158, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.026416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267460 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1095158, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.026416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267478 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1095158, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.026416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267488 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1095376, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.095794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267502 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1095158, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.026416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267512 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095153, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0227947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267521 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095179, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0327933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267530 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095166, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0287933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267539 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095179, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0327933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267557 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095179, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0327933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267572 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095179, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0327933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267581 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1095158, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.026416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267591 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1095376, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.095794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267600 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095153, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0227947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267609 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1095177, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0317934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267618 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095153, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0227947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267635 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095153, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0227947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267650 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095153, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0227947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267659 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095168, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0302746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267668 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095166, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0287933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267678 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095179, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0327933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267687 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095166, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0287933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267696 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095166, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0287933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267728 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1095177, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0317934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267743 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095163, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.026416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267752 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095168, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0302746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267761 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095374, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0947938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267771 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095163, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.026416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267780 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095166, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0287933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267789 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1095177, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0317934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267812 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095144, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0204027, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267822 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095374, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0947938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267831 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095153, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0227947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.267841 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1095158, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.026416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False,
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 07:17:33.267850 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095395, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.106208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.267859 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095144, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0204027, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.267869 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095168, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0302746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.267890 | orchestrator | 
skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1095177, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0317934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.267900 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095395, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.106208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.267910 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1095177, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0317934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.267919 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095166, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0287933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.267928 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095181, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0937939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.267937 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095181, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0937939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.267947 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095157, 'dev': 137, 'nlink': 1, 'atime': 
1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0244641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.267967 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095163, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.026416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.267982 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095168, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0302746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.267991 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095168, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0302746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268001 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1095177, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0317934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268010 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095147, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0218055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268019 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095157, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0244641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268028 | orchestrator | 
changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095179, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0327933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 07:17:33.268043 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095374, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0947938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268060 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095147, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0218055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268070 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095163, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.026416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268080 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095144, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0204027, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268089 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095163, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.026416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268098 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095168, 'dev': 137, 'nlink': 1, 
'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0302746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268112 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095172, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0315037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268121 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095172, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0315037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268197 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095395, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.106208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268209 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095374, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0947938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268218 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095374, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0947938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268250 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095171, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0309582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268260 | orchestrator | 
skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095163, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.026416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268277 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095181, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0937939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268286 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095144, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0204027, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268304 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095171, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0309582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268314 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095395, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.106208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268324 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095153, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0227947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 07:17:33.268333 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095392, 'dev': 137, 'nlink': 1, 'atime': 
1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.104794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268342 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095144, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0204027, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268366 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:17:33.268375 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095157, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0244641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268385 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095374, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0947938, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268403 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095395, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.106208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268413 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095392, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.104794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268422 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095181, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0937939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268431 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:17:33.268441 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095147, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0218055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268454 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095144, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0204027, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268464 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095395, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.106208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:17:33.268473 | orchestrator | 
skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095172, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0315037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268491 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095181, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0937939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268501 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095157, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0244641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268510 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095166, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0287933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268519 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095181, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0937939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268533 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095147, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0218055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268543 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095157, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0244641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268552 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095171, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0309582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268569 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095147, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0218055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268579 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095157, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0244641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268588 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095392, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.104794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268597 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:17:33.268606 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095172, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0315037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268621 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095172, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0315037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268630 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095147, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0218055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268639 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095171, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0309582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268659 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095171, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0309582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268669 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1095177, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0317934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268678 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095392, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.104794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268692 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:17:33.268716 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095172, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0315037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268725 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095392, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.104794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268735 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:17:33.268744 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095171, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0309582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268753 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095168, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0302746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268771 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095392, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.104794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268780 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:17:33.268789 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095163, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.026416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268799 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095374, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0947938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268813 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095144, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0204027, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268822 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095395, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.106208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268831 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095181, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0937939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268840 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095157, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0244641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268858 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095147, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0218055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268868 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095172, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0315037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268882 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095171, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.0309582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268891 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095392, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263619.104794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:17:33.268900 | orchestrator |
2025-09-19 07:17:33.268909 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-09-19 07:17:33.268918 | orchestrator | Friday 19 September 2025 07:15:15 +0000 (0:00:24.472) 0:00:48.651 ******
2025-09-19 07:17:33.268927 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 07:17:33.268936 | orchestrator |
2025-09-19 07:17:33.268945 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-09-19 07:17:33.268954 | orchestrator | Friday 19 September 2025 07:15:16 +0000 (0:00:00.630) 0:00:49.281 ******
2025-09-19 07:17:33.268963 | orchestrator | [WARNING]: Skipped
2025-09-19 07:17:33.268972 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 07:17:33.268981 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-09-19 07:17:33.268990 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 07:17:33.268999 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-09-19 07:17:33.269008 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 07:17:33.269017 | orchestrator | [WARNING]: Skipped
2025-09-19 07:17:33.269026 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 07:17:33.269035 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-09-19 07:17:33.269044 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 07:17:33.269053 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-09-19 07:17:33.269061 | orchestrator | [WARNING]: Skipped
2025-09-19 07:17:33.269070 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 07:17:33.269079 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-09-19 07:17:33.269088 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 07:17:33.269097 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-09-19 07:17:33.269106 | orchestrator | [WARNING]: Skipped
2025-09-19 07:17:33.269115 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 07:17:33.269123 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-09-19 07:17:33.269132 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 07:17:33.269141 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-09-19 07:17:33.269150 | orchestrator | [WARNING]: Skipped
2025-09-19 07:17:33.269162 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 07:17:33.269176 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-09-19 07:17:33.269185 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 07:17:33.269198 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-09-19 07:17:33.269207 | orchestrator | [WARNING]: Skipped
2025-09-19 07:17:33.269215 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 07:17:33.269224 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-09-19 07:17:33.269233 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 07:17:33.269242 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-09-19 07:17:33.269251 | orchestrator | [WARNING]: Skipped
2025-09-19 07:17:33.269259 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 07:17:33.269268 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-09-19 07:17:33.269277 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 07:17:33.269286 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-09-19 07:17:33.269294 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-19 07:17:33.269303 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-19 07:17:33.269312 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-19 07:17:33.269321 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-19 07:17:33.269329 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-19 07:17:33.269338 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-19 07:17:33.269347 | orchestrator |
2025-09-19 07:17:33.269356 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-09-19 07:17:33.269365 | orchestrator | Friday 19 September 2025 07:15:18 +0000 (0:00:01.972) 0:00:51.253 ******
2025-09-19 07:17:33.269374 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 07:17:33.269383 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:17:33.269392 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 07:17:33.269400 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:17:33.269409 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 07:17:33.269418 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:17:33.269427 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 07:17:33.269436 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:17:33.269445 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 07:17:33.269454 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:17:33.269463 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 07:17:33.269472 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:17:33.269480 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 07:17:33.269489 | orchestrator |
2025-09-19 07:17:33.269498 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-09-19 07:17:33.269507 | orchestrator | Friday 19 September 2025 07:15:34 +0000 (0:00:15.848) 0:01:07.102 ******
2025-09-19 07:17:33.269516 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-19 07:17:33.269524 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-19 07:17:33.269533 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:17:33.269542 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:17:33.269551 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-19 07:17:33.269560 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:17:33.269569 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-19 07:17:33.269583 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:17:33.269592 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-19 07:17:33.269600 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:17:33.269609 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-19 07:17:33.269618 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:17:33.269627 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-19 07:17:33.269636 | orchestrator |
2025-09-19 07:17:33.269645 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-09-19 07:17:33.269654 | orchestrator | Friday 19 September 2025 07:15:37 +0000 (0:00:03.574) 0:01:10.676 ******
2025-09-19 07:17:33.269663 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-19 07:17:33.269672 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-19 07:17:33.269681 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:17:33.269689 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-19 07:17:33.269718 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:17:33.269732 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:17:33.269745 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-19 07:17:33.269755 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:17:33.269764 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-19 07:17:33.269773 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-19 07:17:33.269782 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:17:33.269791 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-19 07:17:33.269800 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:17:33.269809 | orchestrator |
2025-09-19 07:17:33.269818 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-09-19 07:17:33.269827 | orchestrator | Friday 19 September 2025 07:15:40 +0000 (0:00:02.342) 0:01:13.018 ******
2025-09-19 07:17:33.269835 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 07:17:33.269844 | orchestrator |
2025-09-19 07:17:33.269853 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-09-19 07:17:33.269862 | orchestrator | Friday 19 September 2025 07:15:40 +0000 (0:00:00.697) 0:01:13.716 ******
2025-09-19 07:17:33.269871 | orchestrator | skipping: [testbed-manager]
2025-09-19 07:17:33.269880 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:17:33.269889 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:17:33.269898 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:17:33.269906 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:17:33.269915 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:17:33.269924 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:17:33.269933 | orchestrator |
2025-09-19 07:17:33.269942 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-09-19 07:17:33.269951 | orchestrator | Friday 19 September 2025 07:15:41 +0000 (0:00:00.797) 0:01:14.513 ******
2025-09-19 07:17:33.269960 | orchestrator | skipping: [testbed-manager]
2025-09-19 07:17:33.269968 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:17:33.269977 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:17:33.269986 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:17:33.269995 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:17:33.270009 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:17:33.270039 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:17:33.270050 | orchestrator |
2025-09-19 07:17:33.270059 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-09-19 07:17:33.270068 | orchestrator | Friday 19 September 2025 07:15:45 +0000 (0:00:03.527) 0:01:18.040 ******
2025-09-19 07:17:33.270077 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-19 07:17:33.270086 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-19 07:17:33.270095 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-19 07:17:33.270104 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:17:33.270113 | orchestrator | skipping: [testbed-manager]
2025-09-19 07:17:33.270122 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:17:33.270131 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-19 07:17:33.270140 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:17:33.270149 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-19 07:17:33.270158 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:17:33.270167 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-19 07:17:33.270175 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:17:33.270184 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-19 07:17:33.270193 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:17:33.270202 | orchestrator |
2025-09-19 07:17:33.270211 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-09-19 07:17:33.270220 | orchestrator | Friday 19 September 2025 07:15:47 +0000 (0:00:02.135) 0:01:20.175 ******
2025-09-19 07:17:33.270229 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-19 07:17:33.270238 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:17:33.270247 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-19 07:17:33.270256 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:17:33.270265 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-19 07:17:33.270274 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:17:33.270283 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-19 07:17:33.270292 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:17:33.270301 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-19 07:17:33.270310 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:17:33.270319 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-19 07:17:33.270328 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:17:33.270341 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-19 07:17:33.270350 | orchestrator |
2025-09-19 07:17:33.270364 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-09-19 07:17:33.270373 | orchestrator | Friday 19 September 2025 07:15:48 +0000 (0:00:01.686) 0:01:21.862 ******
2025-09-19 07:17:33.270382 | orchestrator | [WARNING]: Skipped
2025-09-19 07:17:33.270391 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2025-09-19 07:17:33.270400 | orchestrator | due to this access issue:
2025-09-19 07:17:33.270409 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2025-09-19 07:17:33.270418 | orchestrator | not a directory
2025-09-19 07:17:33.270432 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 07:17:33.270441 | orchestrator |
2025-09-19 07:17:33.270450 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-09-19 07:17:33.270459 | orchestrator | Friday 19 September 2025 07:15:50 +0000 (0:00:01.284) 0:01:23.147 ******
2025-09-19 07:17:33.270467 | orchestrator | skipping: [testbed-manager]
2025-09-19 07:17:33.270476 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:17:33.270485 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:17:33.270494 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:17:33.270503 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:17:33.270512 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:17:33.270520 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:17:33.270529 | orchestrator |
2025-09-19 07:17:33.270538 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-09-19 07:17:33.270547 | orchestrator | Friday 19 September 2025 07:15:51 +0000 (0:00:01.122) 0:01:24.269 ******
2025-09-19 07:17:33.270556 | orchestrator | skipping: [testbed-manager]
2025-09-19 07:17:33.270565 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:17:33.270574 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:17:33.270583 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:17:33.270591 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:17:33.270600 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:17:33.270609 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:17:33.270618 | orchestrator |
2025-09-19 07:17:33.270627 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2025-09-19 07:17:33.270636 | orchestrator | Friday 19 September 2025 07:15:52 +0000 (0:00:00.841) 0:01:25.110 ******
2025-09-19 07:17:33.270646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:17:33.270656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:17:33.270665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:17:33.270675 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 07:17:33.270733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:17:33.270746 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711',
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:17:33.270755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:17:33.270765 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:17:33.270774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:17:33.270784 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:17:33.270793 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:17:33.270802 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:17:33.270827 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:17:33.270836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:17:33.270845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:17:33.270853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:17:33.270862 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:17:33.270870 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:17:33.270883 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-19 07:17:33.270900 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 07:17:33.270909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:17:33.270918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:17:33.270926 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:17:33.270935 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 07:17:33.270943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:17:33.270956 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 07:17:33.270972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:17:33.270981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:17:33.270989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:17:33.270998 | orchestrator | 2025-09-19 07:17:33.271006 | orchestrator | TASK [prometheus : Creating 
prometheus database user and setting permissions] *** 2025-09-19 07:17:33.271014 | orchestrator | Friday 19 September 2025 07:15:57 +0000 (0:00:05.042) 0:01:30.153 ****** 2025-09-19 07:17:33.271022 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-19 07:17:33.271030 | orchestrator | skipping: [testbed-manager] 2025-09-19 07:17:33.271038 | orchestrator | 2025-09-19 07:17:33.271047 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-19 07:17:33.271055 | orchestrator | Friday 19 September 2025 07:15:59 +0000 (0:00:02.155) 0:01:32.308 ****** 2025-09-19 07:17:33.271062 | orchestrator | 2025-09-19 07:17:33.271071 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-19 07:17:33.271078 | orchestrator | Friday 19 September 2025 07:15:59 +0000 (0:00:00.115) 0:01:32.424 ****** 2025-09-19 07:17:33.271086 | orchestrator | 2025-09-19 07:17:33.271095 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-19 07:17:33.271103 | orchestrator | Friday 19 September 2025 07:15:59 +0000 (0:00:00.083) 0:01:32.508 ****** 2025-09-19 07:17:33.271111 | orchestrator | 2025-09-19 07:17:33.271119 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-19 07:17:33.271127 | orchestrator | Friday 19 September 2025 07:15:59 +0000 (0:00:00.205) 0:01:32.713 ****** 2025-09-19 07:17:33.271135 | orchestrator | 2025-09-19 07:17:33.271143 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-19 07:17:33.271151 | orchestrator | Friday 19 September 2025 07:15:59 +0000 (0:00:00.100) 0:01:32.814 ****** 2025-09-19 07:17:33.271159 | orchestrator | 2025-09-19 07:17:33.271175 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-19 07:17:33.271183 | orchestrator | Friday 19 
September 2025 07:15:59 +0000 (0:00:00.103) 0:01:32.917 ****** 2025-09-19 07:17:33.271192 | orchestrator | 2025-09-19 07:17:33.271200 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-19 07:17:33.271207 | orchestrator | Friday 19 September 2025 07:16:00 +0000 (0:00:00.105) 0:01:33.022 ****** 2025-09-19 07:17:33.271215 | orchestrator | 2025-09-19 07:17:33.271223 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-09-19 07:17:33.271231 | orchestrator | Friday 19 September 2025 07:16:00 +0000 (0:00:00.128) 0:01:33.151 ****** 2025-09-19 07:17:33.271239 | orchestrator | changed: [testbed-manager] 2025-09-19 07:17:33.271247 | orchestrator | 2025-09-19 07:17:33.271255 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-09-19 07:17:33.271263 | orchestrator | Friday 19 September 2025 07:16:13 +0000 (0:00:13.035) 0:01:46.186 ****** 2025-09-19 07:17:33.271271 | orchestrator | changed: [testbed-manager] 2025-09-19 07:17:33.271279 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:17:33.271287 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:17:33.271295 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:17:33.271303 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:17:33.271311 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:17:33.271319 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:17:33.271327 | orchestrator | 2025-09-19 07:17:33.271335 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-09-19 07:17:33.271344 | orchestrator | Friday 19 September 2025 07:16:29 +0000 (0:00:16.347) 0:02:02.533 ****** 2025-09-19 07:17:33.271351 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:17:33.271359 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:17:33.271367 | orchestrator | changed: [testbed-node-0] 2025-09-19 
07:17:33.271375 | orchestrator | 2025-09-19 07:17:33.271384 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-09-19 07:17:33.271392 | orchestrator | Friday 19 September 2025 07:16:39 +0000 (0:00:09.958) 0:02:12.492 ****** 2025-09-19 07:17:33.271400 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:17:33.271408 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:17:33.271416 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:17:33.271424 | orchestrator | 2025-09-19 07:17:33.271432 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-09-19 07:17:33.271440 | orchestrator | Friday 19 September 2025 07:16:49 +0000 (0:00:09.708) 0:02:22.200 ****** 2025-09-19 07:17:33.271448 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:17:33.271459 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:17:33.271471 | orchestrator | changed: [testbed-manager] 2025-09-19 07:17:33.271479 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:17:33.271487 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:17:33.271495 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:17:33.271503 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:17:33.271511 | orchestrator | 2025-09-19 07:17:33.271519 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-09-19 07:17:33.271527 | orchestrator | Friday 19 September 2025 07:17:03 +0000 (0:00:13.907) 0:02:36.107 ****** 2025-09-19 07:17:33.271536 | orchestrator | changed: [testbed-manager] 2025-09-19 07:17:33.271544 | orchestrator | 2025-09-19 07:17:33.271552 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-09-19 07:17:33.271560 | orchestrator | Friday 19 September 2025 07:17:09 +0000 (0:00:06.567) 0:02:42.675 ****** 2025-09-19 07:17:33.271568 | orchestrator | changed: [testbed-node-0] 2025-09-19 
07:17:33.271576 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:17:33.271584 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:17:33.271592 | orchestrator | 2025-09-19 07:17:33.271600 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-09-19 07:17:33.271608 | orchestrator | Friday 19 September 2025 07:17:16 +0000 (0:00:06.334) 0:02:49.009 ****** 2025-09-19 07:17:33.271620 | orchestrator | changed: [testbed-manager] 2025-09-19 07:17:33.271628 | orchestrator | 2025-09-19 07:17:33.271636 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-09-19 07:17:33.271644 | orchestrator | Friday 19 September 2025 07:17:20 +0000 (0:00:04.527) 0:02:53.537 ****** 2025-09-19 07:17:33.271652 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:17:33.271660 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:17:33.271668 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:17:33.271676 | orchestrator | 2025-09-19 07:17:33.271684 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:17:33.271692 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-19 07:17:33.271713 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-19 07:17:33.271722 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-19 07:17:33.271730 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-19 07:17:33.271738 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-19 07:17:33.271746 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-19 07:17:33.271755 | 
orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-19 07:17:33.271762 | orchestrator | 2025-09-19 07:17:33.271777 | orchestrator | 2025-09-19 07:17:33.271791 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:17:33.271805 | orchestrator | Friday 19 September 2025 07:17:32 +0000 (0:00:11.878) 0:03:05.416 ****** 2025-09-19 07:17:33.271820 | orchestrator | =============================================================================== 2025-09-19 07:17:33.271834 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 24.47s 2025-09-19 07:17:33.271848 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 16.35s 2025-09-19 07:17:33.271857 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.85s 2025-09-19 07:17:33.271865 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.91s 2025-09-19 07:17:33.271873 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 13.04s 2025-09-19 07:17:33.271881 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 11.88s 2025-09-19 07:17:33.271889 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 9.96s 2025-09-19 07:17:33.271897 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 9.71s 2025-09-19 07:17:33.271905 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 6.57s 2025-09-19 07:17:33.271913 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 6.33s 2025-09-19 07:17:33.271921 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.00s 2025-09-19 07:17:33.271929 | orchestrator | prometheus : Copying over 
config.json files ----------------------------- 5.67s 2025-09-19 07:17:33.271937 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.04s 2025-09-19 07:17:33.271945 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.53s 2025-09-19 07:17:33.271953 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.83s 2025-09-19 07:17:33.271961 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.57s 2025-09-19 07:17:33.271975 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.53s 2025-09-19 07:17:33.271983 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.34s 2025-09-19 07:17:33.272088 | orchestrator | prometheus : Creating prometheus database user and setting permissions --- 2.16s 2025-09-19 07:17:33.272106 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.14s 2025-09-19 07:17:33.272114 | orchestrator | 2025-09-19 07:17:33 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:17:33.272123 | orchestrator | 2025-09-19 07:17:33 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:17:33.272131 | orchestrator | 2025-09-19 07:17:33 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:17:36.322255 | orchestrator | 2025-09-19 07:17:36.322336 | orchestrator | 2025-09-19 07:17:36.322351 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:17:36.322363 | orchestrator | 2025-09-19 07:17:36.322374 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 07:17:36.322385 | orchestrator | Friday 19 September 2025 07:14:34 +0000 (0:00:00.263) 0:00:00.263 ****** 2025-09-19 07:17:36.322396 | orchestrator | ok: [testbed-node-0] 
2025-09-19 07:17:36.322408 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:17:36.322419 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:17:36.322430 | orchestrator |
2025-09-19 07:17:36.322441 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 07:17:36.322452 | orchestrator | Friday 19 September 2025  07:14:34 +0000 (0:00:00.284)       0:00:00.547 ******
2025-09-19 07:17:36.322463 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-09-19 07:17:36.322474 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-09-19 07:17:36.322485 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-09-19 07:17:36.322496 | orchestrator |
2025-09-19 07:17:36.322507 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-09-19 07:17:36.322518 | orchestrator |
2025-09-19 07:17:36.322529 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-19 07:17:36.322541 | orchestrator | Friday 19 September 2025  07:14:35 +0000 (0:00:00.520)       0:00:01.068 ******
2025-09-19 07:17:36.322552 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:17:36.322564 | orchestrator |
2025-09-19 07:17:36.322575 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-09-19 07:17:36.322586 | orchestrator | Friday 19 September 2025  07:14:35 +0000 (0:00:00.759)       0:00:01.828 ******
2025-09-19 07:17:36.322597 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-09-19 07:17:36.322853 | orchestrator |
2025-09-19 07:17:36.322872 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-09-19 07:17:36.322883 | orchestrator | Friday 19 September 2025  07:14:46 +0000 (0:00:10.656)       0:00:12.484 ******
2025-09-19 07:17:36.322894 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-09-19 07:17:36.322906 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-09-19 07:17:36.322917 | orchestrator |
2025-09-19 07:17:36.322928 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-09-19 07:17:36.322939 | orchestrator | Friday 19 September 2025  07:14:52 +0000 (0:00:06.230)       0:00:18.714 ******
2025-09-19 07:17:36.322950 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-09-19 07:17:36.322961 | orchestrator |
2025-09-19 07:17:36.322972 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-09-19 07:17:36.322983 | orchestrator | Friday 19 September 2025  07:14:55 +0000 (0:00:03.123)       0:00:21.839 ******
2025-09-19 07:17:36.322995 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 07:17:36.323025 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-09-19 07:17:36.323037 | orchestrator |
2025-09-19 07:17:36.323048 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-09-19 07:17:36.323059 | orchestrator | Friday 19 September 2025  07:14:59 +0000 (0:00:03.587)       0:00:25.426 ******
2025-09-19 07:17:36.323070 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 07:17:36.323081 | orchestrator |
2025-09-19 07:17:36.323093 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2025-09-19 07:17:36.323104 | orchestrator | Friday 19 September 2025  07:15:03 +0000 (0:00:03.667)       0:00:29.094 ******
2025-09-19 07:17:36.323115 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2025-09-19 07:17:36.323126 | orchestrator |
2025-09-19 07:17:36.323137 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2025-09-19 07:17:36.323148 | orchestrator | Friday 19 September 2025  07:15:07 +0000 (0:00:04.005)       0:00:33.099 ******
2025-09-19 07:17:36.323179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 07:17:36.323197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 07:17:36.323336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 07:17:36.323360 | orchestrator |
2025-09-19 07:17:36.323372 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-19 07:17:36.323383 | orchestrator | Friday 19 September 2025  07:15:10 +0000 (0:00:03.483)       0:00:36.583 ******
2025-09-19 07:17:36.323405 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:17:36.323417 | orchestrator |
2025-09-19 07:17:36.323428 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2025-09-19 07:17:36.323439 | orchestrator | Friday 19 September 2025  07:15:11 +0000 (0:00:00.672)       0:00:37.256 ******
2025-09-19 07:17:36.323450 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:17:36.323462 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:17:36.323472 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:17:36.323483 | orchestrator |
2025-09-19 07:17:36.323496 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2025-09-19 07:17:36.323508 | orchestrator | Friday 19 September 2025  07:15:15 +0000 (0:00:04.015)       0:00:41.272 ******
2025-09-19 07:17:36.323521 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-19 07:17:36.323534 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-19 07:17:36.323546 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-19 07:17:36.323559 | orchestrator |
2025-09-19 07:17:36.323572 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-09-19 07:17:36.323584 | orchestrator | Friday 19 September 2025  07:15:16 +0000 (0:00:01.462)       0:00:42.734 ******
2025-09-19 07:17:36.323597 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-19 07:17:36.323618 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-19 07:17:36.323631 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-19 07:17:36.323644 | orchestrator |
2025-09-19 07:17:36.323656 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-09-19 07:17:36.323669 | orchestrator | Friday 19 September 2025  07:15:17 +0000 (0:00:01.182)       0:00:43.916 ******
2025-09-19 07:17:36.323681 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:17:36.323694 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:17:36.323728 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:17:36.323741 | orchestrator |
2025-09-19 07:17:36.323754 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-09-19 07:17:36.323767 | orchestrator | Friday 19 September 2025  07:15:18 +0000 (0:00:00.961)       0:00:44.878 ******
2025-09-19 07:17:36.323779 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:17:36.323791 | orchestrator |
2025-09-19 07:17:36.323804 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-09-19 07:17:36.323816 | orchestrator | Friday 19 September 2025  07:15:19 +0000 (0:00:00.217)       0:00:45.096 ******
2025-09-19 07:17:36.323829 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:17:36.323842 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:17:36.323853 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:17:36.323864 | orchestrator |
2025-09-19 07:17:36.323875 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-19 07:17:36.323886 | orchestrator | Friday 19 September 2025  07:15:19 +0000 (0:00:00.518)       0:00:45.614 ******
2025-09-19 07:17:36.323897 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:17:36.323908 | orchestrator |
2025-09-19 07:17:36.323919 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2025-09-19 07:17:36.323930 | orchestrator | Friday 19 September 2025  07:15:20 +0000 (0:00:00.481)       0:00:46.095 ******
2025-09-19 07:17:36.323953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 07:17:36.323968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 07:17:36.323988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 07:17:36.324000 | orchestrator |
2025-09-19 07:17:36.324015 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] ***
2025-09-19 07:17:36.324027 | orchestrator | Friday 19 September 2025  07:15:24 +0000 (0:00:04.833)       0:00:50.929 ******
2025-09-19 07:17:36.324048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 07:17:36.324070 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:17:36.324082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 07:17:36.324094 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:17:36.324118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 07:17:36.324136 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:17:36.324148 | orchestrator |
2025-09-19 07:17:36.324159 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ******
2025-09-19 07:17:36.324170 | orchestrator | Friday 19 September 2025  07:15:27 +0000 (0:00:02.745)       0:00:53.674 ******
2025-09-19 07:17:36.324182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 07:17:36.324193 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:17:36.324216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 07:17:36.324234 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:17:36.324246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 07:17:36.324258 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:17:36.324269 | orchestrator |
2025-09-19 07:17:36.324280 | orchestrator | TASK [glance : Creating TLS backend PEM File] **********************************
2025-09-19 07:17:36.324291 | orchestrator | Friday 19 September 2025  07:15:30 +0000 (0:00:02.764)       0:00:56.439 ******
2025-09-19 07:17:36.324302 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:17:36.324313 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:17:36.324324 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:17:36.324335 | orchestrator |
2025-09-19 07:17:36.324346 | orchestrator | TASK [glance : Copying over config.json files for services] ********************
2025-09-19 07:17:36.324357 | orchestrator | Friday 19 September 2025  07:15:34 +0000 (0:00:04.559)       0:01:00.999 ******
2025-09-19 07:17:36.324379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 07:17:36.324399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 07:17:36.324416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 07:17:36.324437 | orchestrator |
2025-09-19 07:17:36.324448 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2025-09-19 07:17:36.324460 | orchestrator | Friday 19 September 2025  07:15:40 +0000 (0:00:05.638)       0:01:06.637 ******
2025-09-19 07:17:36.324471 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:17:36.324482 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:17:36.324493 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:17:36.324504 | orchestrator |
2025-09-19 07:17:36.324515 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2025-09-19 07:17:36.324532 | orchestrator | Friday 19 September 2025  07:15:48 +0000 (0:00:07.926)       0:01:14.564 ******
2025-09-19 07:17:36.324543 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:17:36.324554 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:17:36.324565 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:17:36.324576 | orchestrator |
2025-09-19 07:17:36.324587 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2025-09-19 07:17:36.324599 | orchestrator | Friday 19 September 2025  07:15:51 +0000 (0:00:05.077)       0:01:17.903 ******
2025-09-19 07:17:36.324610 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:17:36.324621 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:17:36.324632 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:17:36.324643 | orchestrator |
2025-09-19 07:17:36.324654 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2025-09-19 07:17:36.324665 | orchestrator | Friday 19 September 2025  07:15:56 +0000 (0:00:04.138)       0:01:22.981 ******
2025-09-19 07:17:36.324676 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:17:36.324687 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:17:36.324752 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:17:36.324766 | orchestrator |
2025-09-19 07:17:36.324777 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2025-09-19 07:17:36.324788 | orchestrator | Friday 19 September 2025  07:16:01 +0000 (0:00:04.138)       0:01:27.119 ******
2025-09-19 07:17:36.324799 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:17:36.324810 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:17:36.324821 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:17:36.324832 | orchestrator |
2025-09-19 07:17:36.324843 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2025-09-19 07:17:36.324854 | orchestrator | Friday 19 September 2025  07:16:05 +0000 (0:00:04.014)       0:01:31.133 ******
2025-09-19 07:17:36.324865 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:17:36.324876 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:17:36.324887 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:17:36.324898 | orchestrator |
2025-09-19 07:17:36.324909 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2025-09-19 07:17:36.324920 | orchestrator | Friday 19 September 2025  07:16:05 +0000 (0:00:00.350)       0:01:31.484 ******
2025-09-19 07:17:36.324930 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-09-19 07:17:36.324939 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:17:36.324949 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-09-19 07:17:36.324959 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:17:36.324969 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-09-19 07:17:36.324979 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:17:36.324988 | orchestrator |
2025-09-19 07:17:36.324998 | orchestrator | TASK [glance : Check glance containers] ****************************************
2025-09-19 07:17:36.325008 | orchestrator | Friday 19 September 2025  07:16:15 +0000 (0:00:09.849)       0:01:41.333 ******
2025-09-19 07:17:36.325023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292',
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 07:17:36.325050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 07:17:36.325062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 07:17:36.325079 | orchestrator | 2025-09-19 07:17:36.325089 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-19 07:17:36.325099 | orchestrator | Friday 19 September 2025 07:16:22 +0000 (0:00:07.110) 0:01:48.444 ****** 2025-09-19 07:17:36.325108 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:17:36.325118 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:17:36.325128 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:17:36.325138 | orchestrator | 2025-09-19 07:17:36.325148 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-09-19 07:17:36.325158 | orchestrator | Friday 19 September 2025 07:16:22 +0000 (0:00:00.438) 0:01:48.882 ****** 2025-09-19 07:17:36.325168 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:17:36.325177 | orchestrator | 2025-09-19 07:17:36.325191 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-09-19 07:17:36.325201 | orchestrator | Friday 19 September 2025 07:16:25 +0000 (0:00:02.227) 0:01:51.109 ****** 2025-09-19 07:17:36.325211 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:17:36.325220 | orchestrator | 2025-09-19 07:17:36.325230 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-09-19 07:17:36.325240 | orchestrator | Friday 19 September 2025 07:16:27 +0000 (0:00:02.188) 0:01:53.298 ****** 2025-09-19 07:17:36.325250 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:17:36.325260 | orchestrator | 2025-09-19 07:17:36.325270 | 
orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-09-19 07:17:36.325284 | orchestrator | Friday 19 September 2025 07:16:29 +0000 (0:00:02.033) 0:01:55.331 ****** 2025-09-19 07:17:36.325294 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:17:36.325304 | orchestrator | 2025-09-19 07:17:36.325314 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-09-19 07:17:36.325324 | orchestrator | Friday 19 September 2025 07:16:59 +0000 (0:00:30.089) 0:02:25.420 ****** 2025-09-19 07:17:36.325333 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:17:36.325343 | orchestrator | 2025-09-19 07:17:36.325353 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-19 07:17:36.325362 | orchestrator | Friday 19 September 2025 07:17:01 +0000 (0:00:02.066) 0:02:27.487 ****** 2025-09-19 07:17:36.325372 | orchestrator | 2025-09-19 07:17:36.325382 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-19 07:17:36.325392 | orchestrator | Friday 19 September 2025 07:17:01 +0000 (0:00:00.239) 0:02:27.727 ****** 2025-09-19 07:17:36.325401 | orchestrator | 2025-09-19 07:17:36.325411 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-19 07:17:36.325421 | orchestrator | Friday 19 September 2025 07:17:01 +0000 (0:00:00.064) 0:02:27.791 ****** 2025-09-19 07:17:36.325430 | orchestrator | 2025-09-19 07:17:36.325440 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-09-19 07:17:36.325450 | orchestrator | Friday 19 September 2025 07:17:01 +0000 (0:00:00.067) 0:02:27.859 ****** 2025-09-19 07:17:36.325465 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:17:36.325474 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:17:36.325484 | orchestrator | changed: [testbed-node-2] 
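The glance-api container definitions echoed above each carry an identical `custom_member_list` of HAProxy backend lines, one per controller node. As a hedged illustration (node names, IPs, and check options are taken from the log; `build_member_lines` is a hypothetical helper, not a kolla-ansible function), such lines can be generated from an inventory mapping:

```python
# Sketch: build HAProxy backend "server" lines like those in custom_member_list.
# Node/IP pairs come from the log above; build_member_lines is a hypothetical
# helper for illustration, not part of kolla-ansible.
def build_member_lines(nodes, port=9292, check="check inter 2000 rise 2 fall 5"):
    return [f"server {name} {ip}:{port} {check}" for name, ip in nodes]

nodes = [
    ("testbed-node-0", "192.168.16.10"),
    ("testbed-node-1", "192.168.16.11"),
    ("testbed-node-2", "192.168.16.12"),
]
members = build_member_lines(nodes)
```

Generating the lines from one node list keeps the internal (`glance_api`) and external (`glance_api_external`) frontends consistent, which is exactly what the duplicated lists in the log show.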
2025-09-19 07:17:36.325494 | orchestrator | 2025-09-19 07:17:36.325504 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:17:36.325514 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-19 07:17:36.325525 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-19 07:17:36.325535 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-19 07:17:36.325544 | orchestrator | 2025-09-19 07:17:36.325554 | orchestrator | 2025-09-19 07:17:36.325564 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:17:36.325574 | orchestrator | Friday 19 September 2025 07:17:35 +0000 (0:00:33.929) 0:03:01.788 ****** 2025-09-19 07:17:36.325583 | orchestrator | =============================================================================== 2025-09-19 07:17:36.325593 | orchestrator | glance : Restart glance-api container ---------------------------------- 33.93s 2025-09-19 07:17:36.325603 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 30.09s 2025-09-19 07:17:36.325613 | orchestrator | service-ks-register : glance | Creating services ----------------------- 10.66s 2025-09-19 07:17:36.325622 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 9.85s 2025-09-19 07:17:36.325632 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.93s 2025-09-19 07:17:36.325642 | orchestrator | glance : Check glance containers ---------------------------------------- 7.11s 2025-09-19 07:17:36.325651 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.23s 2025-09-19 07:17:36.325661 | orchestrator | glance : Copying over config.json files for services 
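The long runs of "Task … is in state STARTED" followed by "Wait 1 second(s) until the next check" in this log are a plain poll-until-done loop over task state. A minimal sketch of that pattern, assuming a `get_state` callable (hypothetical here; the real OSISM tooling queries its task backend):

```python
import time

def wait_for_task(task_id, get_state, interval=1.0, timeout=300.0):
    """Poll get_state(task_id) until it reports SUCCESS or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_state(task_id)
        if state == "SUCCESS":
            return state
        # Mirrors the log: "Task <id> is in state STARTED",
        # then "Wait 1 second(s) until the next check".
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish within {timeout}s")
```

The log shows several task IDs being checked in the same round before sleeping; the sketch covers one task for clarity.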
-------------------- 5.64s 2025-09-19 07:17:36.325671 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.08s 2025-09-19 07:17:36.325681 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.83s 2025-09-19 07:17:36.325690 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.56s 2025-09-19 07:17:36.325714 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.14s 2025-09-19 07:17:36.325724 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.02s 2025-09-19 07:17:36.325734 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.01s 2025-09-19 07:17:36.325744 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.01s 2025-09-19 07:17:36.325753 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.67s 2025-09-19 07:17:36.325763 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.59s 2025-09-19 07:17:36.325773 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.48s 2025-09-19 07:17:36.325783 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.34s 2025-09-19 07:17:36.325792 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.12s 2025-09-19 07:17:36.325802 | orchestrator | 2025-09-19 07:17:36 | INFO  | Task cf7c7930-f409-485d-a7db-a227ea58ab5c is in state SUCCESS 2025-09-19 07:17:36.325816 | orchestrator | 2025-09-19 07:17:36 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:17:36.325826 | orchestrator | 2025-09-19 07:17:36 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:17:36.325836 | orchestrator | 2025-09-19 07:17:36 | INFO  | 
Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:17:36.325851 | orchestrator | 2025-09-19 07:17:36 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:17:39.365403 | orchestrator | 2025-09-19 07:17:39 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:17:39.368332 | orchestrator | 2025-09-19 07:17:39 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:17:39.370144 | orchestrator | 2025-09-19 07:17:39 | INFO  | Task 6ac6db3b-078f-442e-9ae0-26321471c32e is in state STARTED 2025-09-19 07:17:39.371724 | orchestrator | 2025-09-19 07:17:39 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:17:39.371932 | orchestrator | 2025-09-19 07:17:39 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:17:42.407469 | orchestrator | 2025-09-19 07:17:42 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:17:42.409244 | orchestrator | 2025-09-19 07:17:42 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:17:42.414053 | orchestrator | 2025-09-19 07:17:42 | INFO  | Task 6ac6db3b-078f-442e-9ae0-26321471c32e is in state STARTED 2025-09-19 07:17:42.414094 | orchestrator | 2025-09-19 07:17:42 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:17:42.414114 | orchestrator | 2025-09-19 07:17:42 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:17:45.452341 | orchestrator | 2025-09-19 07:17:45 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:17:45.453757 | orchestrator | 2025-09-19 07:17:45 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:17:45.455948 | orchestrator | 2025-09-19 07:17:45 | INFO  | Task 6ac6db3b-078f-442e-9ae0-26321471c32e is in state STARTED 2025-09-19 07:17:45.458232 | orchestrator | 2025-09-19 07:17:45 | INFO  | Task 
17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:18:49.336524 | orchestrator | 2025-09-19 07:18:49 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:18:52.359857 | orchestrator | 2025-09-19 07:18:52 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:18:52.359942 | orchestrator | 2025-09-19 07:18:52 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:18:52.360435 | orchestrator | 2025-09-19 07:18:52 | INFO  | Task 6ac6db3b-078f-442e-9ae0-26321471c32e is in state STARTED 2025-09-19 07:18:52.361131 | orchestrator | 2025-09-19 07:18:52 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:18:52.361154 | orchestrator | 2025-09-19 07:18:52 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:18:55.398938 | orchestrator | 2025-09-19 07:18:55 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state STARTED 2025-09-19 07:18:55.399352 | orchestrator | 2025-09-19 07:18:55 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:18:55.400172 | orchestrator | 2025-09-19 07:18:55 | INFO  | Task 6ac6db3b-078f-442e-9ae0-26321471c32e is in state STARTED 2025-09-19 07:18:55.400953 | orchestrator | 2025-09-19 07:18:55 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:18:55.400969 | orchestrator | 2025-09-19 07:18:55 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:18:58.434329 | orchestrator | 2025-09-19 07:18:58 | INFO  | Task 8c044c66-4d38-40b0-9021-77464c3aaaaf is in state SUCCESS 2025-09-19 07:18:58.435099 | orchestrator | 2025-09-19 07:18:58.435141 | orchestrator | 2025-09-19 07:18:58.435154 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:18:58.435246 | orchestrator | 2025-09-19 07:18:58.435265 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 
2025-09-19 07:18:58.435277 | orchestrator | Friday 19 September 2025 07:15:01 +0000 (0:00:00.262) 0:00:00.262 ******
2025-09-19 07:18:58.435288 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:18:58.435300 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:18:58.435311 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:18:58.435322 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:18:58.435447 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:18:58.436093 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:18:58.436107 | orchestrator |
2025-09-19 07:18:58.436119 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 07:18:58.436130 | orchestrator | Friday 19 September 2025 07:15:02 +0000 (0:00:00.892) 0:00:01.155 ******
2025-09-19 07:18:58.436141 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-09-19 07:18:58.436153 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-09-19 07:18:58.436164 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-09-19 07:18:58.436176 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-09-19 07:18:58.436202 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-09-19 07:18:58.436235 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-09-19 07:18:58.436247 | orchestrator |
2025-09-19 07:18:58.436258 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-09-19 07:18:58.436270 | orchestrator |
2025-09-19 07:18:58.436281 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-09-19 07:18:58.436292 | orchestrator | Friday 19 September 2025 07:15:03 +0000 (0:00:00.821) 0:00:01.977 ******
2025-09-19 07:18:58.436304 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:18:58.436338 | orchestrator |
2025-09-19 07:18:58.436350 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-09-19 07:18:58.436361 | orchestrator | Friday 19 September 2025 07:15:04 +0000 (0:00:00.942) 0:00:02.920 ******
2025-09-19 07:18:58.436373 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-09-19 07:18:58.436384 | orchestrator |
2025-09-19 07:18:58.436395 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-09-19 07:18:58.436406 | orchestrator | Friday 19 September 2025 07:15:07 +0000 (0:00:03.443) 0:00:06.363 ******
2025-09-19 07:18:58.436417 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-09-19 07:18:58.436428 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-09-19 07:18:58.436439 | orchestrator |
2025-09-19 07:18:58.436450 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-09-19 07:18:58.436461 | orchestrator | Friday 19 September 2025 07:15:14 +0000 (0:00:06.578) 0:00:12.942 ******
2025-09-19 07:18:58.436472 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 07:18:58.436484 | orchestrator |
2025-09-19 07:18:58.436495 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-09-19 07:18:58.436506 | orchestrator | Friday 19 September 2025 07:15:17 +0000 (0:00:02.751) 0:00:15.694 ******
2025-09-19 07:18:58.436517 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 07:18:58.436528 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-09-19 07:18:58.436539 | orchestrator |
2025-09-19 07:18:58.436550 | orchestrator | TASK [service-ks-register : cinder | Creating roles]
***************************
2025-09-19 07:18:58.436561 | orchestrator | Friday 19 September 2025 07:15:20 +0000 (0:00:03.578) 0:00:19.273 ******
2025-09-19 07:18:58.436572 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 07:18:58.436583 | orchestrator |
2025-09-19 07:18:58.436595 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2025-09-19 07:18:58.436606 | orchestrator | Friday 19 September 2025 07:15:23 +0000 (0:00:03.154) 0:00:22.428 ******
2025-09-19 07:18:58.436617 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2025-09-19 07:18:58.436628 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2025-09-19 07:18:58.436639 | orchestrator |
2025-09-19 07:18:58.436650 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2025-09-19 07:18:58.436661 | orchestrator | Friday 19 September 2025 07:15:30 +0000 (0:00:07.127) 0:00:29.555 ******
2025-09-19 07:18:58.436744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-19 07:18:58.436780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-19 07:18:58.436794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-19 07:18:58.436808 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-19 07:18:58.436822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:18:58.436836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:18:58.436888 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-19 07:18:58.436907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:18:58.436920 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-19 07:18:58.436932 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-19 07:18:58.436944 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-19 07:18:58.436990 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-19 07:18:58.437004 | orchestrator |
2025-09-19 07:18:58.437016 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-09-19 07:18:58.437027 | orchestrator | Friday 19 September 2025 07:15:33 +0000 (0:00:02.634) 0:00:32.189 ******
2025-09-19 07:18:58.437038 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:18:58.437049 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:18:58.437060 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:18:58.437071 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:18:58.437082 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:18:58.437093 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:18:58.437104 | orchestrator |
2025-09-19 07:18:58.437115 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-09-19 07:18:58.437126 | orchestrator | Friday 19 September 2025 07:15:34 +0000 (0:00:00.614) 0:00:32.804 ******
2025-09-19 07:18:58.437137 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:18:58.437148 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:18:58.437159 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:18:58.437175 | orchestrator | included:
/ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:18:58.437187 | orchestrator |
2025-09-19 07:18:58.437198 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-09-19 07:18:58.437209 | orchestrator | Friday 19 September 2025 07:15:35 +0000 (0:00:01.750) 0:00:34.554 ******
2025-09-19 07:18:58.437220 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-09-19 07:18:58.437231 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-09-19 07:18:58.437242 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-09-19 07:18:58.437253 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup)
2025-09-19 07:18:58.437264 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup)
2025-09-19 07:18:58.437275 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup)
2025-09-19 07:18:58.437286 | orchestrator |
2025-09-19 07:18:58.437297 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2025-09-19 07:18:58.437308 | orchestrator | Friday 19 September 2025 07:15:38 +0000 (0:00:02.152) 0:00:36.707 ******
2025-09-19 07:18:58.437320 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-09-19 07:18:58.437333 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-09-19 07:18:58.437379 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-09-19 07:18:58.437393 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-09-19 07:18:58.437410 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-09-19 07:18:58.437422 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-09-19 07:18:58.437434 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-09-19 07:18:58.437480 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-09-19 07:18:58.437498 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-09-19 07:18:58.437511 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-09-19 07:18:58.437522 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-09-19 07:18:58.437540 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-09-19 07:18:58.437552 | orchestrator |
2025-09-19 07:18:58.437563 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2025-09-19 07:18:58.437574 | orchestrator | Friday 19 September 2025 07:15:42 +0000 (0:00:04.019)
0:00:40.726 ******
2025-09-19 07:18:58.437585 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-09-19 07:18:58.437597 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-09-19 07:18:58.437608 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-09-19 07:18:58.437619 | orchestrator |
2025-09-19 07:18:58.437630 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-09-19 07:18:58.437641 | orchestrator | Friday 19 September 2025 07:15:45 +0000 (0:00:03.401) 0:00:44.128 ******
2025-09-19 07:18:58.437679 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-09-19 07:18:58.437710 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-09-19 07:18:58.437722 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-09-19 07:18:58.437734 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-09-19 07:18:58.437745 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-09-19 07:18:58.437756 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-09-19 07:18:58.437767 | orchestrator |
2025-09-19 07:18:58.437778 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-09-19 07:18:58.437789 | orchestrator | Friday 19 September 2025 07:15:48 +0000 (0:00:03.266) 0:00:47.395 ******
2025-09-19 07:18:58.437800 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-09-19 07:18:58.437811 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-09-19 07:18:58.437822 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-09-19 07:18:58.437833 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-09-19 07:18:58.437844 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-09-19 07:18:58.437866 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-09-19 07:18:58.437877 | orchestrator |
2025-09-19 07:18:58.437889 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-09-19 07:18:58.437900 | orchestrator | Friday 19 September 2025 07:15:49 +0000 (0:00:01.065) 0:00:48.460 ******
2025-09-19 07:18:58.437911 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:18:58.437922 | orchestrator |
2025-09-19 07:18:58.437933 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-09-19 07:18:58.437944 | orchestrator | Friday 19 September 2025 07:15:50 +0000 (0:00:00.175) 0:00:48.635 ******
2025-09-19 07:18:58.437955 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:18:58.437966 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:18:58.437977 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:18:58.437995 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:18:58.438006 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:18:58.438056 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:18:58.438071 | orchestrator |
2025-09-19 07:18:58.438082 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-09-19 07:18:58.438093 | orchestrator | Friday 19 September 2025 07:15:51 +0000 (0:00:00.936) 0:00:49.571 ******
2025-09-19 07:18:58.438105 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:18:58.438117 | orchestrator |
2025-09-19 07:18:58.438128 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-09-19 07:18:58.438139 | orchestrator | Friday 19 September 2025 07:15:52
+0000 (0:00:01.230) 0:00:50.802 ******
2025-09-19 07:18:58.438151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-19 07:18:58.438163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-19 07:18:58.438211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-19 07:18:58.438237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:18:58.438256 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro',
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.438333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.438346 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.438387 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.438407 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.438429 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.438441 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.438452 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.438464 | orchestrator | 2025-09-19 07:18:58.438475 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-09-19 07:18:58.438486 | orchestrator | Friday 19 September 2025 07:15:56 +0000 
(0:00:04.353) 0:00:55.155 ****** 2025-09-19 07:18:58.438504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 07:18:58.438516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.438539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 07:18:58.438551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.438562 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:18:58.438573 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:18:58.438585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 07:18:58.438597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.438608 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:18:58.438628 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.438650 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.438662 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:18:58.438674 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.438704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.438717 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:18:58.438728 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.438749 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.438768 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:18:58.438779 | orchestrator | 2025-09-19 07:18:58.438790 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-09-19 07:18:58.438802 | orchestrator | Friday 19 September 2025 07:15:57 +0000 (0:00:01.399) 0:00:56.555 ****** 2025-09-19 07:18:58.438818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 07:18:58.438830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.438842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 07:18:58.438853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.438865 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:18:58.438883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 07:18:58.438910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.438922 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:18:58.438933 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:18:58.438945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.438956 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.438970 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:18:58.438984 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.439005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.439024 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:18:58.439135 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.439155 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.439168 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:18:58.439180 | orchestrator | 2025-09-19 07:18:58.439192 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-09-19 07:18:58.439203 | orchestrator | Friday 19 September 2025 07:15:59 +0000 (0:00:01.829) 0:00:58.385 ****** 2025-09-19 07:18:58.439215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 07:18:58.439227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 07:18:58.439254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 07:18:58.439272 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.439284 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.439296 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.439313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.439331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.439348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.439360 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.439372 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.439384 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.439401 | orchestrator | 2025-09-19 07:18:58.439412 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-09-19 07:18:58.439423 | orchestrator | Friday 19 September 2025 07:16:02 +0000 (0:00:02.933) 0:01:01.319 ****** 2025-09-19 07:18:58.439435 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-19 07:18:58.439446 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:18:58.439457 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-19 07:18:58.439468 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:18:58.439479 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-19 07:18:58.439490 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:18:58.439501 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-19 07:18:58.439513 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-19 07:18:58.439530 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 
2025-09-19 07:18:58.439541 | orchestrator | 2025-09-19 07:18:58.439552 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-09-19 07:18:58.439563 | orchestrator | Friday 19 September 2025 07:16:05 +0000 (0:00:02.500) 0:01:03.819 ****** 2025-09-19 07:18:58.439579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 07:18:58.439591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 07:18:58.439603 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.439648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 
07:18:58.439667 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.439699 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.439712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.439724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.439743 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.439754 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.439772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.439789 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 
07:18:58.439800 | orchestrator | 2025-09-19 07:18:58.439812 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-09-19 07:18:58.439823 | orchestrator | Friday 19 September 2025 07:16:18 +0000 (0:00:13.529) 0:01:17.349 ****** 2025-09-19 07:18:58.439834 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:18:58.439845 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:18:58.439856 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:18:58.439867 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:18:58.439878 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:18:58.439889 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:18:58.439900 | orchestrator | 2025-09-19 07:18:58.439911 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-09-19 07:18:58.439922 | orchestrator | Friday 19 September 2025 07:16:22 +0000 (0:00:03.390) 0:01:20.739 ****** 2025-09-19 07:18:58.439934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 07:18:58.439951 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 07:18:58.439968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.439981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.439992 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:18:58.440008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 07:18:58.440020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.440037 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:18:58.440048 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:18:58.440060 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.440072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.440084 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:18:58.440101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': 
[''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.440118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.440130 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:18:58.440141 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.440161 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 07:18:58.440173 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:18:58.440184 | orchestrator | 2025-09-19 07:18:58.440196 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-19 07:18:58.440207 | orchestrator | Friday 19 September 2025 07:16:23 +0000 (0:00:01.134) 0:01:21.874 ****** 2025-09-19 07:18:58.440218 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:18:58.440229 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:18:58.440240 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:18:58.440251 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:18:58.440262 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:18:58.440273 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:18:58.440284 | orchestrator | 2025-09-19 07:18:58.440296 | orchestrator | TASK [cinder : Check cinder containers] 
**************************************** 2025-09-19 07:18:58.440311 | orchestrator | Friday 19 September 2025 07:16:23 +0000 (0:00:00.535) 0:01:22.410 ****** 2025-09-19 07:18:58.440329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 07:18:58.440346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}}}}) 2025-09-19 07:18:58.440365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 07:18:58.440488 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.440501 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.440521 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.440538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.440558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.440570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.440581 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.440593 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.440611 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 07:18:58.440623 | orchestrator | 2025-09-19 07:18:58.440634 | orchestrator | TASK 
[cinder : include_tasks] ************************************************** 2025-09-19 07:18:58.440646 | orchestrator | Friday 19 September 2025 07:16:26 +0000 (0:00:02.156) 0:01:24.566 ****** 2025-09-19 07:18:58.440664 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:18:58.440679 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:18:58.440718 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:18:58.440730 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:18:58.440741 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:18:58.440752 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:18:58.440763 | orchestrator | 2025-09-19 07:18:58.440774 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-09-19 07:18:58.440785 | orchestrator | Friday 19 September 2025 07:16:26 +0000 (0:00:00.763) 0:01:25.330 ****** 2025-09-19 07:18:58.440796 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:18:58.440808 | orchestrator | 2025-09-19 07:18:58.440818 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-09-19 07:18:58.440830 | orchestrator | Friday 19 September 2025 07:16:28 +0000 (0:00:02.027) 0:01:27.358 ****** 2025-09-19 07:18:58.440841 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:18:58.440852 | orchestrator | 2025-09-19 07:18:58.440863 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-09-19 07:18:58.440874 | orchestrator | Friday 19 September 2025 07:16:31 +0000 (0:00:02.253) 0:01:29.611 ****** 2025-09-19 07:18:58.440885 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:18:58.440896 | orchestrator | 2025-09-19 07:18:58.440907 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 07:18:58.440918 | orchestrator | Friday 19 September 2025 07:16:52 +0000 (0:00:21.624) 0:01:51.235 ****** 2025-09-19 07:18:58.440929 
| orchestrator | 2025-09-19 07:18:58.440940 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 07:18:58.440951 | orchestrator | Friday 19 September 2025 07:16:52 +0000 (0:00:00.220) 0:01:51.455 ****** 2025-09-19 07:18:58.440962 | orchestrator | 2025-09-19 07:18:58.440973 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 07:18:58.440984 | orchestrator | Friday 19 September 2025 07:16:53 +0000 (0:00:00.146) 0:01:51.602 ****** 2025-09-19 07:18:58.440995 | orchestrator | 2025-09-19 07:18:58.441006 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 07:18:58.441017 | orchestrator | Friday 19 September 2025 07:16:53 +0000 (0:00:00.163) 0:01:51.765 ****** 2025-09-19 07:18:58.441028 | orchestrator | 2025-09-19 07:18:58.441039 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 07:18:58.441050 | orchestrator | Friday 19 September 2025 07:16:53 +0000 (0:00:00.108) 0:01:51.874 ****** 2025-09-19 07:18:58.441061 | orchestrator | 2025-09-19 07:18:58.441072 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 07:18:58.441083 | orchestrator | Friday 19 September 2025 07:16:53 +0000 (0:00:00.149) 0:01:52.024 ****** 2025-09-19 07:18:58.441094 | orchestrator | 2025-09-19 07:18:58.441105 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-09-19 07:18:58.441119 | orchestrator | Friday 19 September 2025 07:16:53 +0000 (0:00:00.093) 0:01:52.118 ****** 2025-09-19 07:18:58.441131 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:18:58.441143 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:18:58.441156 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:18:58.441169 | orchestrator | 2025-09-19 07:18:58.441182 | orchestrator | RUNNING 
HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-09-19 07:18:58.441195 | orchestrator | Friday 19 September 2025 07:17:20 +0000 (0:00:26.872) 0:02:18.990 ****** 2025-09-19 07:18:58.441207 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:18:58.441220 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:18:58.441232 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:18:58.441244 | orchestrator | 2025-09-19 07:18:58.441257 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-09-19 07:18:58.441270 | orchestrator | Friday 19 September 2025 07:17:30 +0000 (0:00:09.792) 0:02:28.783 ****** 2025-09-19 07:18:58.441289 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:18:58.441302 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:18:58.441314 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:18:58.441325 | orchestrator | 2025-09-19 07:18:58.441336 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-09-19 07:18:58.441347 | orchestrator | Friday 19 September 2025 07:18:40 +0000 (0:01:10.456) 0:03:39.239 ****** 2025-09-19 07:18:58.441358 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:18:58.441369 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:18:58.441380 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:18:58.441391 | orchestrator | 2025-09-19 07:18:58.441402 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-09-19 07:18:58.441413 | orchestrator | Friday 19 September 2025 07:18:55 +0000 (0:00:14.564) 0:03:53.804 ****** 2025-09-19 07:18:58.441424 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:18:58.441435 | orchestrator | 2025-09-19 07:18:58.441446 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:18:58.441463 | orchestrator | testbed-node-0 : ok=21  changed=15  
unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-19 07:18:58.441476 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-19 07:18:58.441487 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-19 07:18:58.441498 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-19 07:18:58.441509 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-19 07:18:58.441525 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-19 07:18:58.441536 | orchestrator | 2025-09-19 07:18:58.441547 | orchestrator | 2025-09-19 07:18:58.441559 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:18:58.441570 | orchestrator | Friday 19 September 2025 07:18:56 +0000 (0:00:01.292) 0:03:55.096 ****** 2025-09-19 07:18:58.441581 | orchestrator | =============================================================================== 2025-09-19 07:18:58.441592 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 70.46s 2025-09-19 07:18:58.441603 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 26.87s 2025-09-19 07:18:58.441614 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.62s 2025-09-19 07:18:58.441625 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 14.56s 2025-09-19 07:18:58.441637 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 13.53s 2025-09-19 07:18:58.441648 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 9.79s 2025-09-19 07:18:58.441659 | orchestrator | service-ks-register : cinder | 
Granting user roles ---------------------- 7.13s 2025-09-19 07:18:58.441670 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.58s 2025-09-19 07:18:58.441681 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.35s 2025-09-19 07:18:58.441706 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.02s 2025-09-19 07:18:58.441717 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.58s 2025-09-19 07:18:58.441728 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.44s 2025-09-19 07:18:58.441739 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 3.40s 2025-09-19 07:18:58.441750 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.39s 2025-09-19 07:18:58.441768 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.27s 2025-09-19 07:18:58.441779 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.16s 2025-09-19 07:18:58.441790 | orchestrator | cinder : Copying over config.json files for services -------------------- 2.93s 2025-09-19 07:18:58.441801 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.75s 2025-09-19 07:18:58.441813 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.63s 2025-09-19 07:18:58.441824 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.50s 2025-09-19 07:18:58.441835 | orchestrator | 2025-09-19 07:18:58 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:18:58.441846 | orchestrator | 2025-09-19 07:18:58 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:18:58.441857 | orchestrator | 2025-09-19 07:18:58 | 
INFO  | Task 6ac6db3b-078f-442e-9ae0-26321471c32e is in state STARTED 2025-09-19 07:18:58.441869 | orchestrator | 2025-09-19 07:18:58 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:18:58.441880 | orchestrator | 2025-09-19 07:18:58 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:19:01.483275 | orchestrator | 2025-09-19 07:19:01 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:19:01.484335 | orchestrator | 2025-09-19 07:19:01 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:19:01.485179 | orchestrator | 2025-09-19 07:19:01 | INFO  | Task 6ac6db3b-078f-442e-9ae0-26321471c32e is in state STARTED 2025-09-19 07:19:01.485987 | orchestrator | 2025-09-19 07:19:01 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:19:01.486207 | orchestrator | 2025-09-19 07:19:01 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:19:04.524605 | orchestrator | 2025-09-19 07:19:04 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:19:04.526968 | orchestrator | 2025-09-19 07:19:04 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:19:04.528825 | orchestrator | 2025-09-19 07:19:04 | INFO  | Task 6ac6db3b-078f-442e-9ae0-26321471c32e is in state STARTED 2025-09-19 07:19:04.532367 | orchestrator | 2025-09-19 07:19:04 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:19:04.532428 | orchestrator | 2025-09-19 07:19:04 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:19:07.564136 | orchestrator | 2025-09-19 07:19:07 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:19:07.564900 | orchestrator | 2025-09-19 07:19:07 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:19:07.565902 | orchestrator | 2025-09-19 07:19:07 | INFO  | Task 
6ac6db3b-078f-442e-9ae0-26321471c32e is in state STARTED 2025-09-19 07:19:07.566928 | orchestrator | 2025-09-19 07:19:07 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:19:07.567191 | orchestrator | 2025-09-19 07:19:07 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:19:10.603116 | orchestrator | 2025-09-19 07:19:10 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:19:10.603212 | orchestrator | 2025-09-19 07:19:10 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:19:10.603271 | orchestrator | 2025-09-19 07:19:10 | INFO  | Task 6ac6db3b-078f-442e-9ae0-26321471c32e is in state STARTED 2025-09-19 07:19:10.603285 | orchestrator | 2025-09-19 07:19:10 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:19:10.603318 | orchestrator | 2025-09-19 07:19:10 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:19:13.648655 | orchestrator | 2025-09-19 07:19:13 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:19:13.649760 | orchestrator | 2025-09-19 07:19:13 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:19:13.651186 | orchestrator | 2025-09-19 07:19:13 | INFO  | Task 6ac6db3b-078f-442e-9ae0-26321471c32e is in state STARTED 2025-09-19 07:19:13.652611 | orchestrator | 2025-09-19 07:19:13 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:19:13.652889 | orchestrator | 2025-09-19 07:19:13 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:19:16.681617 | orchestrator | 2025-09-19 07:19:16 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:19:16.681861 | orchestrator | 2025-09-19 07:19:16 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:19:16.683568 | orchestrator | 2025-09-19 07:19:16 | INFO  | Task 
6ac6db3b-078f-442e-9ae0-26321471c32e is in state STARTED 2025-09-19 07:19:16.684626 | orchestrator | 2025-09-19 07:19:16 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:19:16.684737 | orchestrator | 2025-09-19 07:19:16 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:19:19.732629 | orchestrator | 2025-09-19 07:19:19 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:19:19.734756 | orchestrator | 2025-09-19 07:19:19 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:19:19.736760 | orchestrator | 2025-09-19 07:19:19 | INFO  | Task 6ac6db3b-078f-442e-9ae0-26321471c32e is in state STARTED 2025-09-19 07:19:19.738498 | orchestrator | 2025-09-19 07:19:19 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:19:19.738722 | orchestrator | 2025-09-19 07:19:19 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:19:22.780965 | orchestrator | 2025-09-19 07:19:22 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:19:22.781066 | orchestrator | 2025-09-19 07:19:22 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:19:22.782909 | orchestrator | 2025-09-19 07:19:22 | INFO  | Task 6ac6db3b-078f-442e-9ae0-26321471c32e is in state STARTED 2025-09-19 07:19:22.783435 | orchestrator | 2025-09-19 07:19:22 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:19:22.783456 | orchestrator | 2025-09-19 07:19:22 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:19:25.814360 | orchestrator | 2025-09-19 07:19:25 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:19:25.814487 | orchestrator | 2025-09-19 07:19:25 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:19:25.814905 | orchestrator | 2025-09-19 07:19:25 | INFO  | Task 
6ac6db3b-078f-442e-9ae0-26321471c32e is in state STARTED 2025-09-19 07:19:25.815716 | orchestrator | 2025-09-19 07:19:25 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:19:25.815743 | orchestrator | 2025-09-19 07:19:25 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:19:28.847563 | orchestrator | 2025-09-19 07:19:28 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:19:28.847793 | orchestrator | 2025-09-19 07:19:28 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:19:28.848416 | orchestrator | 2025-09-19 07:19:28 | INFO  | Task 6ac6db3b-078f-442e-9ae0-26321471c32e is in state STARTED 2025-09-19 07:19:28.849102 | orchestrator | 2025-09-19 07:19:28 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:19:28.849128 | orchestrator | 2025-09-19 07:19:28 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:19:31.875565 | orchestrator | 2025-09-19 07:19:31 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:19:31.877240 | orchestrator | 2025-09-19 07:19:31 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:19:31.878280 | orchestrator | 2025-09-19 07:19:31 | INFO  | Task 6ac6db3b-078f-442e-9ae0-26321471c32e is in state STARTED 2025-09-19 07:19:31.879140 | orchestrator | 2025-09-19 07:19:31 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:19:31.879188 | orchestrator | 2025-09-19 07:19:31 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:19:34.913447 | orchestrator | 2025-09-19 07:19:34 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:19:34.913551 | orchestrator | 2025-09-19 07:19:34 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:19:34.914111 | orchestrator | 2025-09-19 07:19:34 | INFO  | Task 
6ac6db3b-078f-442e-9ae0-26321471c32e is in state STARTED 2025-09-19 07:19:34.914812 | orchestrator | 2025-09-19 07:19:34 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:19:34.914836 | orchestrator | 2025-09-19 07:19:34 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:19:37.935591 | orchestrator | 2025-09-19 07:19:37 | INFO  | Task b076475d-f83a-49d4-95e5-806433b9d042 is in state STARTED 2025-09-19 07:19:37.935773 | orchestrator | 2025-09-19 07:19:37 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:19:37.936199 | orchestrator | 2025-09-19 07:19:37 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:19:37.937464 | orchestrator | 2025-09-19 07:19:37 | INFO  | Task 6ac6db3b-078f-442e-9ae0-26321471c32e is in state SUCCESS 2025-09-19 07:19:37.939316 | orchestrator | 2025-09-19 07:19:37.939424 | orchestrator | 2025-09-19 07:19:37.939449 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:19:37.939470 | orchestrator | 2025-09-19 07:19:37.939488 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 07:19:37.939507 | orchestrator | Friday 19 September 2025 07:17:40 +0000 (0:00:00.281) 0:00:00.281 ****** 2025-09-19 07:19:37.939525 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:19:37.939547 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:19:37.939566 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:19:37.939586 | orchestrator | 2025-09-19 07:19:37.939606 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:19:37.939627 | orchestrator | Friday 19 September 2025 07:17:41 +0000 (0:00:00.320) 0:00:00.601 ****** 2025-09-19 07:19:37.939646 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-09-19 07:19:37.939667 | orchestrator | ok: [testbed-node-1] => 
(item=enable_barbican_True) 2025-09-19 07:19:37.939732 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-09-19 07:19:37.939751 | orchestrator | 2025-09-19 07:19:37.939770 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-09-19 07:19:37.939788 | orchestrator | 2025-09-19 07:19:37.939807 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-19 07:19:37.939827 | orchestrator | Friday 19 September 2025 07:17:41 +0000 (0:00:00.491) 0:00:01.092 ****** 2025-09-19 07:19:37.939845 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:19:37.939902 | orchestrator | 2025-09-19 07:19:37.939923 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-09-19 07:19:37.939942 | orchestrator | Friday 19 September 2025 07:17:42 +0000 (0:00:00.547) 0:00:01.640 ****** 2025-09-19 07:19:37.939961 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-09-19 07:19:37.939980 | orchestrator | 2025-09-19 07:19:37.939998 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-09-19 07:19:37.940018 | orchestrator | Friday 19 September 2025 07:17:45 +0000 (0:00:03.570) 0:00:05.210 ****** 2025-09-19 07:19:37.940039 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-09-19 07:19:37.940060 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-09-19 07:19:37.940079 | orchestrator | 2025-09-19 07:19:37.940099 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-09-19 07:19:37.940118 | orchestrator | Friday 19 September 2025 07:17:52 +0000 (0:00:06.402) 0:00:11.613 ****** 2025-09-19 07:19:37.940137 | 
orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 07:19:37.940156 | orchestrator | 2025-09-19 07:19:37.940176 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-09-19 07:19:37.940195 | orchestrator | Friday 19 September 2025 07:17:55 +0000 (0:00:03.209) 0:00:14.823 ****** 2025-09-19 07:19:37.940213 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 07:19:37.940231 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-09-19 07:19:37.940242 | orchestrator | 2025-09-19 07:19:37.940253 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-09-19 07:19:37.940264 | orchestrator | Friday 19 September 2025 07:17:59 +0000 (0:00:03.896) 0:00:18.720 ****** 2025-09-19 07:19:37.940275 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 07:19:37.940287 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-09-19 07:19:37.940314 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-09-19 07:19:37.940326 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-09-19 07:19:37.940337 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-09-19 07:19:37.940348 | orchestrator | 2025-09-19 07:19:37.940359 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-09-19 07:19:37.940370 | orchestrator | Friday 19 September 2025 07:18:13 +0000 (0:00:14.850) 0:00:33.570 ****** 2025-09-19 07:19:37.940382 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-09-19 07:19:37.940393 | orchestrator | 2025-09-19 07:19:37.940409 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-09-19 07:19:37.940434 | orchestrator | Friday 19 September 2025 07:18:18 +0000 (0:00:04.442) 0:00:38.013 ****** 2025-09-19 07:19:37.940464 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:19:37.940510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:19:37.940547 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:19:37.940568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.940596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.940609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.940633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.940658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.940670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.940729 | orchestrator | 2025-09-19 07:19:37.940741 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-09-19 07:19:37.940753 | orchestrator | Friday 19 September 2025 07:18:20 +0000 (0:00:02.054) 0:00:40.068 ****** 2025-09-19 07:19:37.940764 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-09-19 07:19:37.940775 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-09-19 07:19:37.940786 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-09-19 07:19:37.940797 | orchestrator | 2025-09-19 07:19:37.940808 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-09-19 07:19:37.940819 | orchestrator | Friday 19 September 2025 07:18:21 +0000 (0:00:01.239) 0:00:41.308 ****** 2025-09-19 07:19:37.940831 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:19:37.940842 | orchestrator | 2025-09-19 07:19:37.940853 | 
orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-09-19 07:19:37.940864 | orchestrator | Friday 19 September 2025 07:18:21 +0000 (0:00:00.132) 0:00:41.440 ****** 2025-09-19 07:19:37.940875 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:19:37.940886 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:19:37.940897 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:19:37.940908 | orchestrator | 2025-09-19 07:19:37.940919 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-19 07:19:37.940936 | orchestrator | Friday 19 September 2025 07:18:22 +0000 (0:00:00.484) 0:00:41.925 ****** 2025-09-19 07:19:37.940948 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:19:37.940960 | orchestrator | 2025-09-19 07:19:37.940971 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-09-19 07:19:37.940982 | orchestrator | Friday 19 September 2025 07:18:22 +0000 (0:00:00.552) 0:00:42.477 ****** 2025-09-19 07:19:37.940994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:19:37.941023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:19:37.941036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:19:37.941048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.941072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.941085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.941103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.941134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.941154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.941174 | orchestrator | 2025-09-19 07:19:37.941195 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-09-19 07:19:37.941214 | orchestrator | Friday 19 September 2025 07:18:26 +0000 (0:00:03.477) 0:00:45.954 ****** 2025-09-19 07:19:37.941232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 07:19:37.941251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:19:37.941272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:19:37.941284 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:19:37.941305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 07:19:37.941317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:19:37.941329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:19:37.941340 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:19:37.941357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 07:19:37.941376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:19:37.941387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:19:37.941399 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:19:37.941410 | orchestrator | 2025-09-19 07:19:37.941428 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-09-19 07:19:37.941440 | orchestrator | Friday 19 September 2025 07:18:27 +0000 (0:00:00.863) 0:00:46.818 ****** 2025-09-19 07:19:37.941452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 07:19:37.941463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:19:37.941475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:19:37.941492 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:19:37.941508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 07:19:37.941520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
2025-09-19 07:19:37.941539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:19:37.941551 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:19:37.941563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 07:19:37.941574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:19:37.941597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:19:37.941609 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:19:37.941620 | orchestrator | 2025-09-19 07:19:37.941631 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-09-19 07:19:37.941643 | orchestrator | Friday 19 September 2025 07:18:28 +0000 (0:00:01.521) 0:00:48.339 ****** 2025-09-19 07:19:37.941654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:19:37.941727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:19:37.941742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:19:37.941754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.941780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.941792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.941812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.941824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.941836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.941848 | orchestrator | 2025-09-19 07:19:37.941860 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-09-19 07:19:37.941871 | orchestrator | Friday 19 September 2025 07:18:33 +0000 (0:00:04.449) 0:00:52.789 ****** 2025-09-19 07:19:37.941882 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:19:37.941893 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:19:37.941911 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:19:37.941922 | orchestrator | 2025-09-19 07:19:37.941933 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-09-19 07:19:37.941943 | orchestrator | Friday 19 September 2025 07:18:35 +0000 (0:00:02.246) 0:00:55.036 ****** 2025-09-19 07:19:37.941953 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 07:19:37.941963 | orchestrator | 2025-09-19 07:19:37.941972 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-09-19 07:19:37.941982 | orchestrator | Friday 19 September 2025 07:18:36 +0000 (0:00:01.277) 0:00:56.313 ****** 2025-09-19 07:19:37.941992 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:19:37.942002 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:19:37.942012 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:19:37.942077 | orchestrator | 2025-09-19 07:19:37.942088 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-09-19 07:19:37.942098 | orchestrator | 
Friday 19 September 2025 07:18:37 +0000 (0:00:00.678) 0:00:56.991 ****** 2025-09-19 07:19:37.942113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:19:37.942132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:19:37.942143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:19:37.942154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.942172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.942197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.942208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.942226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.942236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.942246 | orchestrator | 2025-09-19 07:19:37.942257 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-09-19 07:19:37.942267 | orchestrator | Friday 19 September 2025 07:18:48 +0000 (0:00:11.434) 0:01:08.425 ****** 2025-09-19 07:19:37.942283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 
'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 07:19:37.942298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:19:37.942308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:19:37.942319 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:19:37.942335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 
'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 07:19:37.942346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:19:37.942362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:19:37.942372 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:19:37.942382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 07:19:37.942397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:19:37.942407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:19:37.942417 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:19:37.942427 | orchestrator | 2025-09-19 07:19:37.942437 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-09-19 07:19:37.942447 | orchestrator | Friday 19 September 2025 07:18:49 +0000 (0:00:01.065) 0:01:09.491 ****** 2025-09-19 07:19:37.942465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:19:37.942481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:19:37.942495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:19:37.942506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.942516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.942533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.942549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.942578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.942589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:19:37.942600 | orchestrator | 2025-09-19 07:19:37.942610 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-19 07:19:37.942620 | orchestrator | Friday 19 September 
2025 07:18:52 +0000 (0:00:02.925) 0:01:12.416 ******
2025-09-19 07:19:37.942630 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:19:37.942640 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:19:37.942654 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:19:37.942665 | orchestrator |
2025-09-19 07:19:37.942693 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2025-09-19 07:19:37.942703 | orchestrator | Friday 19 September 2025 07:18:53 +0000 (0:00:00.472) 0:01:12.889 ******
2025-09-19 07:19:37.942713 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:19:37.942724 | orchestrator |
2025-09-19 07:19:37.942733 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2025-09-19 07:19:37.942743 | orchestrator | Friday 19 September 2025 07:18:55 +0000 (0:00:02.241) 0:01:15.130 ******
2025-09-19 07:19:37.942753 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:19:37.942763 | orchestrator |
2025-09-19 07:19:37.942773 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2025-09-19 07:19:37.942783 | orchestrator | Friday 19 September 2025 07:18:58 +0000 (0:00:02.600) 0:01:17.730 ******
2025-09-19 07:19:37.942793 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:19:37.942803 | orchestrator |
2025-09-19 07:19:37.942813 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-19 07:19:37.942823 | orchestrator | Friday 19 September 2025 07:19:10 +0000 (0:00:12.512) 0:01:30.243 ******
2025-09-19 07:19:37.942833 | orchestrator |
2025-09-19 07:19:37.942843 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-19 07:19:37.942853 | orchestrator | Friday 19 September 2025 07:19:10 +0000 (0:00:00.061) 0:01:30.304 ******
2025-09-19 07:19:37.942862 | orchestrator |
2025-09-19 07:19:37.942872 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-19 07:19:37.942889 | orchestrator | Friday 19 September 2025 07:19:10 +0000 (0:00:00.059) 0:01:30.363 ******
2025-09-19 07:19:37.942899 | orchestrator |
2025-09-19 07:19:37.942909 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2025-09-19 07:19:37.942919 | orchestrator | Friday 19 September 2025 07:19:10 +0000 (0:00:00.061) 0:01:30.425 ******
2025-09-19 07:19:37.942928 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:19:37.942938 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:19:37.942948 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:19:37.942958 | orchestrator |
2025-09-19 07:19:37.942968 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-09-19 07:19:37.942978 | orchestrator | Friday 19 September 2025 07:19:23 +0000 (0:00:12.660) 0:01:43.085 ******
2025-09-19 07:19:37.942988 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:19:37.942998 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:19:37.943013 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:19:37.943023 | orchestrator |
2025-09-19 07:19:37.943033 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-09-19 07:19:37.943043 | orchestrator | Friday 19 September 2025 07:19:29 +0000 (0:00:06.086) 0:01:49.171 ******
2025-09-19 07:19:37.943053 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:19:37.943063 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:19:37.943073 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:19:37.943083 | orchestrator |
2025-09-19 07:19:37.943093 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:19:37.943104 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-19
07:19:37.943115 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 07:19:37.943125 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 07:19:37.943135 | orchestrator |
2025-09-19 07:19:37.943145 | orchestrator |
2025-09-19 07:19:37.943155 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:19:37.943164 | orchestrator | Friday 19 September 2025 07:19:35 +0000 (0:00:05.726) 0:01:54.898 ******
2025-09-19 07:19:37.943174 | orchestrator | ===============================================================================
2025-09-19 07:19:37.943184 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.85s
2025-09-19 07:19:37.943194 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.66s
2025-09-19 07:19:37.943204 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.51s
2025-09-19 07:19:37.943214 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 11.43s
2025-09-19 07:19:37.943224 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.40s
2025-09-19 07:19:37.943234 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 6.09s
2025-09-19 07:19:37.943244 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.73s
2025-09-19 07:19:37.943253 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.45s
2025-09-19 07:19:37.943263 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.44s
2025-09-19 07:19:37.943273 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.90s
2025-09-19 07:19:37.943283 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.57s
2025-09-19 07:19:37.943293 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.48s
2025-09-19 07:19:37.943303 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.21s
2025-09-19 07:19:37.943312 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.93s
2025-09-19 07:19:37.943328 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.60s
2025-09-19 07:19:37.943338 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.25s
2025-09-19 07:19:37.943348 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.24s
2025-09-19 07:19:37.943362 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.06s
2025-09-19 07:19:37.943372 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.52s
2025-09-19 07:19:37.943382 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.28s
2025-09-19 07:19:37.943392 | orchestrator | 2025-09-19 07:19:37 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED
2025-09-19 07:19:37.943402 | orchestrator | 2025-09-19 07:19:37 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:19:40.965241 | orchestrator | 2025-09-19 07:19:40 | INFO  | Task b076475d-f83a-49d4-95e5-806433b9d042 is in state STARTED
2025-09-19 07:19:40.966805 | orchestrator | 2025-09-19 07:19:40 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED
2025-09-19 07:19:40.968243 | orchestrator | 2025-09-19 07:19:40 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED
2025-09-19 07:19:40.968961 | orchestrator | 2025-09-19 07:19:40 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED
2025-09-19
07:19:40.968997 | orchestrator | 2025-09-19 07:19:40 | INFO  | Wait 1 second(s) until the next check
check 2025-09-19 07:20:11.413199 | orchestrator | 2025-09-19 07:20:11 | INFO  | Task b076475d-f83a-49d4-95e5-806433b9d042 is in state STARTED 2025-09-19 07:20:11.414183 | orchestrator | 2025-09-19 07:20:11 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:20:11.415415 | orchestrator | 2025-09-19 07:20:11 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:20:11.416734 | orchestrator | 2025-09-19 07:20:11 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:20:11.417113 | orchestrator | 2025-09-19 07:20:11 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:20:14.471634 | orchestrator | 2025-09-19 07:20:14 | INFO  | Task b076475d-f83a-49d4-95e5-806433b9d042 is in state STARTED 2025-09-19 07:20:14.471793 | orchestrator | 2025-09-19 07:20:14 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:20:14.472079 | orchestrator | 2025-09-19 07:20:14 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:20:14.472650 | orchestrator | 2025-09-19 07:20:14 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:20:14.472788 | orchestrator | 2025-09-19 07:20:14 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:20:17.510163 | orchestrator | 2025-09-19 07:20:17 | INFO  | Task b076475d-f83a-49d4-95e5-806433b9d042 is in state STARTED 2025-09-19 07:20:17.510848 | orchestrator | 2025-09-19 07:20:17 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:20:17.511515 | orchestrator | 2025-09-19 07:20:17 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:20:17.512850 | orchestrator | 2025-09-19 07:20:17 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:20:17.512874 | orchestrator | 2025-09-19 07:20:17 | INFO  | Wait 1 second(s) until the next check 2025-09-19 
07:20:20.546454 | orchestrator | 2025-09-19 07:20:20 | INFO  | Task b076475d-f83a-49d4-95e5-806433b9d042 is in state STARTED 2025-09-19 07:20:20.546693 | orchestrator | 2025-09-19 07:20:20 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:20:20.547719 | orchestrator | 2025-09-19 07:20:20 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:20:20.548073 | orchestrator | 2025-09-19 07:20:20 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:20:20.548210 | orchestrator | 2025-09-19 07:20:20 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:20:23.576581 | orchestrator | 2025-09-19 07:20:23 | INFO  | Task b076475d-f83a-49d4-95e5-806433b9d042 is in state STARTED 2025-09-19 07:20:23.579800 | orchestrator | 2025-09-19 07:20:23 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:20:23.580736 | orchestrator | 2025-09-19 07:20:23 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:20:23.581822 | orchestrator | 2025-09-19 07:20:23 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:20:23.581846 | orchestrator | 2025-09-19 07:20:23 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:20:26.627087 | orchestrator | 2025-09-19 07:20:26 | INFO  | Task b076475d-f83a-49d4-95e5-806433b9d042 is in state STARTED 2025-09-19 07:20:26.630609 | orchestrator | 2025-09-19 07:20:26 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:20:26.632803 | orchestrator | 2025-09-19 07:20:26 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:20:26.635816 | orchestrator | 2025-09-19 07:20:26 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:20:26.635890 | orchestrator | 2025-09-19 07:20:26 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:20:29.668627 | orchestrator 
| 2025-09-19 07:20:29 | INFO  | Task b076475d-f83a-49d4-95e5-806433b9d042 is in state SUCCESS 2025-09-19 07:20:29.669738 | orchestrator | 2025-09-19 07:20:29 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:20:29.670394 | orchestrator | 2025-09-19 07:20:29 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:20:29.671111 | orchestrator | 2025-09-19 07:20:29 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:20:29.671140 | orchestrator | 2025-09-19 07:20:29 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:20:32.723259 | orchestrator | 2025-09-19 07:20:32 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:20:32.725611 | orchestrator | 2025-09-19 07:20:32 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:20:32.726603 | orchestrator | 2025-09-19 07:20:32 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:20:32.729320 | orchestrator | 2025-09-19 07:20:32 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:20:32.729864 | orchestrator | 2025-09-19 07:20:32 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:20:35.758994 | orchestrator | 2025-09-19 07:20:35 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:20:35.759461 | orchestrator | 2025-09-19 07:20:35 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:20:35.760408 | orchestrator | 2025-09-19 07:20:35 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:20:35.761805 | orchestrator | 2025-09-19 07:20:35 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:20:35.761830 | orchestrator | 2025-09-19 07:20:35 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:20:38.807028 | orchestrator | 2025-09-19 07:20:38 | INFO  | 
Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:20:38.807853 | orchestrator | 2025-09-19 07:20:38 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:20:38.809182 | orchestrator | 2025-09-19 07:20:38 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:20:38.810555 | orchestrator | 2025-09-19 07:20:38 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:20:38.810581 | orchestrator | 2025-09-19 07:20:38 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:20:41.846087 | orchestrator | 2025-09-19 07:20:41 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:20:41.846189 | orchestrator | 2025-09-19 07:20:41 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:20:41.846766 | orchestrator | 2025-09-19 07:20:41 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:20:41.849478 | orchestrator | 2025-09-19 07:20:41 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:20:41.849499 | orchestrator | 2025-09-19 07:20:41 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:20:44.876347 | orchestrator | 2025-09-19 07:20:44 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:20:44.878819 | orchestrator | 2025-09-19 07:20:44 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:20:44.881526 | orchestrator | 2025-09-19 07:20:44 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:20:44.883595 | orchestrator | 2025-09-19 07:20:44 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:20:44.883639 | orchestrator | 2025-09-19 07:20:44 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:20:47.920177 | orchestrator | 2025-09-19 07:20:47 | INFO  | Task 
860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:20:47.920509 | orchestrator | 2025-09-19 07:20:47 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:20:47.921100 | orchestrator | 2025-09-19 07:20:47 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:20:47.921748 | orchestrator | 2025-09-19 07:20:47 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:20:47.921842 | orchestrator | 2025-09-19 07:20:47 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:20:50.957632 | orchestrator | 2025-09-19 07:20:50 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:20:50.958832 | orchestrator | 2025-09-19 07:20:50 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:20:50.962113 | orchestrator | 2025-09-19 07:20:50 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:20:50.965465 | orchestrator | 2025-09-19 07:20:50 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:20:50.965510 | orchestrator | 2025-09-19 07:20:50 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:20:54.084437 | orchestrator | 2025-09-19 07:20:54 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:20:54.084597 | orchestrator | 2025-09-19 07:20:54 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:20:54.086009 | orchestrator | 2025-09-19 07:20:54 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:20:54.086088 | orchestrator | 2025-09-19 07:20:54 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:20:54.086105 | orchestrator | 2025-09-19 07:20:54 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:20:57.123003 | orchestrator | 2025-09-19 07:20:57 | INFO  | Task 
860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:20:57.124600 | orchestrator | 2025-09-19 07:20:57 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:20:57.126761 | orchestrator | 2025-09-19 07:20:57 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:20:57.129546 | orchestrator | 2025-09-19 07:20:57 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:20:57.129614 | orchestrator | 2025-09-19 07:20:57 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:21:00.163604 | orchestrator | 2025-09-19 07:21:00 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:21:00.163903 | orchestrator | 2025-09-19 07:21:00 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:21:00.166080 | orchestrator | 2025-09-19 07:21:00 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:21:00.166835 | orchestrator | 2025-09-19 07:21:00 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:21:00.166867 | orchestrator | 2025-09-19 07:21:00 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:21:03.201877 | orchestrator | 2025-09-19 07:21:03 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:21:03.202011 | orchestrator | 2025-09-19 07:21:03 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:21:03.202626 | orchestrator | 2025-09-19 07:21:03 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:21:03.207111 | orchestrator | 2025-09-19 07:21:03 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:21:03.207168 | orchestrator | 2025-09-19 07:21:03 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:21:06.230895 | orchestrator | 2025-09-19 07:21:06 | INFO  | Task 
860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:21:06.231300 | orchestrator | 2025-09-19 07:21:06 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:21:06.232767 | orchestrator | 2025-09-19 07:21:06 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:21:06.232807 | orchestrator | 2025-09-19 07:21:06 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:21:06.232820 | orchestrator | 2025-09-19 07:21:06 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:21:09.256523 | orchestrator | 2025-09-19 07:21:09 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:21:09.256642 | orchestrator | 2025-09-19 07:21:09 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:21:09.257257 | orchestrator | 2025-09-19 07:21:09 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:21:09.257978 | orchestrator | 2025-09-19 07:21:09 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:21:09.258004 | orchestrator | 2025-09-19 07:21:09 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:21:12.292111 | orchestrator | 2025-09-19 07:21:12 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:21:12.292745 | orchestrator | 2025-09-19 07:21:12 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:21:12.293006 | orchestrator | 2025-09-19 07:21:12 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:21:12.293685 | orchestrator | 2025-09-19 07:21:12 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:21:12.293865 | orchestrator | 2025-09-19 07:21:12 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:21:15.318468 | orchestrator | 2025-09-19 07:21:15 | INFO  | Task 
860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:21:15.318903 | orchestrator | 2025-09-19 07:21:15 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:21:15.319439 | orchestrator | 2025-09-19 07:21:15 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:21:15.320159 | orchestrator | 2025-09-19 07:21:15 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:21:15.320186 | orchestrator | 2025-09-19 07:21:15 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:21:18.359801 | orchestrator | 2025-09-19 07:21:18 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:21:18.360025 | orchestrator | 2025-09-19 07:21:18 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:21:18.360780 | orchestrator | 2025-09-19 07:21:18 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:21:18.361788 | orchestrator | 2025-09-19 07:21:18 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:21:18.361851 | orchestrator | 2025-09-19 07:21:18 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:21:21.402798 | orchestrator | 2025-09-19 07:21:21 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:21:21.404534 | orchestrator | 2025-09-19 07:21:21 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:21:21.406708 | orchestrator | 2025-09-19 07:21:21 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:21:21.408419 | orchestrator | 2025-09-19 07:21:21 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:21:21.408669 | orchestrator | 2025-09-19 07:21:21 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:21:24.447323 | orchestrator | 2025-09-19 07:21:24 | INFO  | Task 
860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:21:24.449459 | orchestrator | 2025-09-19 07:21:24 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:21:24.451269 | orchestrator | 2025-09-19 07:21:24 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:21:24.453723 | orchestrator | 2025-09-19 07:21:24 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:21:24.453855 | orchestrator | 2025-09-19 07:21:24 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:21:27.503170 | orchestrator | 2025-09-19 07:21:27 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:21:27.504705 | orchestrator | 2025-09-19 07:21:27 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:21:27.507510 | orchestrator | 2025-09-19 07:21:27 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:21:27.510559 | orchestrator | 2025-09-19 07:21:27 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:21:27.510606 | orchestrator | 2025-09-19 07:21:27 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:21:30.551454 | orchestrator | 2025-09-19 07:21:30 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:21:30.554170 | orchestrator | 2025-09-19 07:21:30 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:21:30.556090 | orchestrator | 2025-09-19 07:21:30 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:21:30.557968 | orchestrator | 2025-09-19 07:21:30 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:21:30.558233 | orchestrator | 2025-09-19 07:21:30 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:21:33.593891 | orchestrator | 2025-09-19 07:21:33 | INFO  | Task 
860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:21:33.594434 | orchestrator | 2025-09-19 07:21:33 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:21:33.595217 | orchestrator | 2025-09-19 07:21:33 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:21:33.596016 | orchestrator | 2025-09-19 07:21:33 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:21:33.596082 | orchestrator | 2025-09-19 07:21:33 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:21:36.620915 | orchestrator | 2025-09-19 07:21:36 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:21:36.621356 | orchestrator | 2025-09-19 07:21:36 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:21:36.622186 | orchestrator | 2025-09-19 07:21:36 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:21:36.622890 | orchestrator | 2025-09-19 07:21:36 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:21:36.623033 | orchestrator | 2025-09-19 07:21:36 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:21:39.667407 | orchestrator | 2025-09-19 07:21:39 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:21:39.668183 | orchestrator | 2025-09-19 07:21:39 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:21:39.670447 | orchestrator | 2025-09-19 07:21:39 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:21:39.673978 | orchestrator | 2025-09-19 07:21:39 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:21:39.674318 | orchestrator | 2025-09-19 07:21:39 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:21:42.717215 | orchestrator | 2025-09-19 07:21:42 | INFO  | Task 
860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:21:42.719421 | orchestrator | 2025-09-19 07:21:42 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:21:42.722154 | orchestrator | 2025-09-19 07:21:42 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:21:42.725975 | orchestrator | 2025-09-19 07:21:42 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:21:42.726207 | orchestrator | 2025-09-19 07:21:42 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:21:45.765031 | orchestrator | 2025-09-19 07:21:45 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:21:45.766321 | orchestrator | 2025-09-19 07:21:45 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:21:45.767119 | orchestrator | 2025-09-19 07:21:45 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:21:45.768080 | orchestrator | 2025-09-19 07:21:45 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:21:45.768117 | orchestrator | 2025-09-19 07:21:45 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:21:48.826169 | orchestrator | 2025-09-19 07:21:48 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED 2025-09-19 07:21:48.827188 | orchestrator | 2025-09-19 07:21:48 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED 2025-09-19 07:21:48.829549 | orchestrator | 2025-09-19 07:21:48 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state STARTED 2025-09-19 07:21:48.831177 | orchestrator | 2025-09-19 07:21:48 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:21:48.831215 | orchestrator | 2025-09-19 07:21:48 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:21:51.891368 | orchestrator | 2025-09-19 07:21:51 | INFO  | Task 
860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED
2025-09-19 07:21:51.893588 | orchestrator | 2025-09-19 07:21:51 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED
2025-09-19 07:21:51.896349 | orchestrator | 2025-09-19 07:21:51 | INFO  | Task 31abba9b-c723-4cd0-9d92-f2109a5b7b73 is in state SUCCESS
2025-09-19 07:21:51.898093 | orchestrator |
2025-09-19 07:21:51.898128 | orchestrator |
2025-09-19 07:21:51.898140 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-09-19 07:21:51.898151 | orchestrator |
2025-09-19 07:21:51.898163 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-09-19 07:21:51.898174 | orchestrator | Friday 19 September 2025 07:19:42 +0000 (0:00:00.160) 0:00:00.160 ******
2025-09-19 07:21:51.898207 | orchestrator | changed: [localhost]
2025-09-19 07:21:51.898220 | orchestrator |
2025-09-19 07:21:51.898231 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-09-19 07:21:51.898242 | orchestrator | Friday 19 September 2025 07:19:43 +0000 (0:00:01.495) 0:00:01.655 ******
2025-09-19 07:21:51.898253 | orchestrator | changed: [localhost]
2025-09-19 07:21:51.898264 | orchestrator |
2025-09-19 07:21:51.898276 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-09-19 07:21:51.898287 | orchestrator | Friday 19 September 2025 07:20:23 +0000 (0:00:39.441) 0:00:41.097 ******
2025-09-19 07:21:51.898298 | orchestrator | changed: [localhost]
2025-09-19 07:21:51.898310 | orchestrator |
2025-09-19 07:21:51.898321 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 07:21:51.898332 | orchestrator |
2025-09-19 07:21:51.898343 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 07:21:51.898354 | orchestrator | Friday 19 September 2025 07:20:27 +0000 (0:00:04.262) 0:00:45.360 ******
2025-09-19 07:21:51.898365 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:21:51.898376 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:21:51.898387 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:21:51.898398 | orchestrator |
2025-09-19 07:21:51.898410 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 07:21:51.898421 | orchestrator | Friday 19 September 2025 07:20:27 +0000 (0:00:00.356) 0:00:45.717 ******
2025-09-19 07:21:51.898432 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-09-19 07:21:51.898443 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-09-19 07:21:51.898454 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-09-19 07:21:51.898466 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-09-19 07:21:51.898477 | orchestrator |
2025-09-19 07:21:51.898488 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-09-19 07:21:51.898499 | orchestrator | skipping: no hosts matched
2025-09-19 07:21:51.898510 | orchestrator |
2025-09-19 07:21:51.898521 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:21:51.898532 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:21:51.898545 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:21:51.898557 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:21:51.898568 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:21:51.898579 | orchestrator |
2025-09-19 07:21:51.898590 | orchestrator |
2025-09-19 07:21:51.898601 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:21:51.898612 | orchestrator | Friday 19 September 2025 07:20:28 +0000 (0:00:00.789) 0:00:46.506 ******
2025-09-19 07:21:51.898623 | orchestrator | ===============================================================================
2025-09-19 07:21:51.898634 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 39.44s
2025-09-19 07:21:51.898645 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.26s
2025-09-19 07:21:51.898696 | orchestrator | Ensure the destination directory exists --------------------------------- 1.50s
2025-09-19 07:21:51.898711 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.79s
2025-09-19 07:21:51.898725 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s
2025-09-19 07:21:51.898737 | orchestrator |
2025-09-19 07:21:51.898750 | orchestrator |
2025-09-19 07:21:51.898763 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 07:21:51.898776 | orchestrator |
2025-09-19 07:21:51.898796 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 07:21:51.898810 | orchestrator | Friday 19 September 2025 07:20:34 +0000 (0:00:00.293) 0:00:00.293 ******
2025-09-19 07:21:51.898843 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:21:51.898857 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:21:51.898870 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:21:51.898883 | orchestrator |
2025-09-19 07:21:51.898896 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 07:21:51.898909 | orchestrator | Friday 19 September 2025 07:20:34 +0000 (0:00:00.306) 0:00:00.599 ******
2025-09-19 07:21:51.898922 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-09-19 07:21:51.898935 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-09-19 07:21:51.898948 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-09-19 07:21:51.898961 | orchestrator |
2025-09-19 07:21:51.898973 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-09-19 07:21:51.898986 | orchestrator |
2025-09-19 07:21:51.898999 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-19 07:21:51.899012 | orchestrator | Friday 19 September 2025 07:20:35 +0000 (0:00:01.144) 0:00:01.744 ******
2025-09-19 07:21:51.899023 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:21:51.899034 | orchestrator |
2025-09-19 07:21:51.899046 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-09-19 07:21:51.899057 | orchestrator | Friday 19 September 2025 07:20:36 +0000 (0:00:00.904) 0:00:02.649 ******
2025-09-19 07:21:51.899080 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-09-19 07:21:51.899092 | orchestrator |
2025-09-19 07:21:51.899104 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-09-19 07:21:51.899115 | orchestrator | Friday 19 September 2025 07:20:40 +0000 (0:00:03.636) 0:00:06.285 ******
2025-09-19 07:21:51.899126 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-09-19 07:21:51.899137 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-09-19 07:21:51.899148 | orchestrator |
2025-09-19 07:21:51.899159 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-09-19 07:21:51.899170 | orchestrator | Friday 19 September 2025 07:20:47 +0000 (0:00:06.887) 0:00:13.173 ******
2025-09-19 07:21:51.899181 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 07:21:51.899192 | orchestrator |
2025-09-19 07:21:51.899203 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-09-19 07:21:51.899214 | orchestrator | Friday 19 September 2025 07:20:50 +0000 (0:00:03.267) 0:00:16.441 ******
2025-09-19 07:21:51.899225 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 07:21:51.899236 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-09-19 07:21:51.899247 | orchestrator |
2025-09-19 07:21:51.899259 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-09-19 07:21:51.899269 | orchestrator | Friday 19 September 2025 07:20:54 +0000 (0:00:04.010) 0:00:20.451 ******
2025-09-19 07:21:51.899281 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 07:21:51.899292 | orchestrator |
2025-09-19 07:21:51.899303 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-09-19 07:21:51.899357 | orchestrator | Friday 19 September 2025 07:20:57 +0000 (0:00:03.172) 0:00:23.624 ******
2025-09-19 07:21:51.899369 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-09-19 07:21:51.899380 | orchestrator |
2025-09-19 07:21:51.899391 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-19 07:21:51.899402 | orchestrator | Friday 19 September 2025 07:21:02 +0000 (0:00:04.924) 0:00:28.549 ******
2025-09-19 07:21:51.899413 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:21:51.899431 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:21:51.899443 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:21:51.899454 | orchestrator |
2025-09-19 07:21:51.899465 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-09-19 07:21:51.899476 | orchestrator | Friday 19 September 2025 07:21:03 +0000 (0:00:00.773) 0:00:29.322 ******
2025-09-19 07:21:51.899490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 07:21:51.899510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 07:21:51.899531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 07:21:51.899544 | orchestrator |
2025-09-19 07:21:51.899555 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2025-09-19 07:21:51.899567 | orchestrator | Friday 19 September 2025 07:21:04 +0000 (0:00:01.490) 0:00:30.812 ******
2025-09-19 07:21:51.899578 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:21:51.899588 | orchestrator |
2025-09-19 07:21:51.899600 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2025-09-19 07:21:51.899611 | orchestrator | Friday 19 September 2025 07:21:04 +0000 (0:00:00.117) 0:00:30.930 ******
2025-09-19 07:21:51.899621 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:21:51.899633 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:21:51.899644 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:21:51.899654 | orchestrator |
2025-09-19 07:21:51.899666 | orchestrator |
TASK [placement : include_tasks] *********************************************** 2025-09-19 07:21:51.899683 | orchestrator | Friday 19 September 2025 07:21:05 +0000 (0:00:00.502) 0:00:31.432 ****** 2025-09-19 07:21:51.899694 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:21:51.899705 | orchestrator | 2025-09-19 07:21:51.899716 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-09-19 07:21:51.899727 | orchestrator | Friday 19 September 2025 07:21:06 +0000 (0:00:00.770) 0:00:32.203 ****** 2025-09-19 07:21:51.899739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:21:51.899756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:21:51.899775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:21:51.899787 | orchestrator | 2025-09-19 07:21:51.899798 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-09-19 07:21:51.899809 | orchestrator | Friday 19 September 2025 07:21:08 +0000 (0:00:02.128) 0:00:34.332 ****** 2025-09-19 07:21:51.899853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 07:21:51.899873 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:21:51.899884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 07:21:51.899896 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:21:51.899907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 07:21:51.899924 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:21:51.899935 | orchestrator | 2025-09-19 07:21:51.899946 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-09-19 07:21:51.899958 | orchestrator | Friday 19 September 2025 07:21:09 +0000 (0:00:00.781) 0:00:35.113 ****** 2025-09-19 07:21:51.899976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 07:21:51.899988 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:21:51.899999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 07:21:51.900017 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:21:51.900028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 07:21:51.900040 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:21:51.900051 | orchestrator | 2025-09-19 07:21:51.900062 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-09-19 07:21:51.900073 | orchestrator | Friday 19 September 2025 07:21:10 +0000 (0:00:01.241) 0:00:36.355 ****** 2025-09-19 07:21:51.900088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:21:51.900100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:21:51.900119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:21:51.900136 | orchestrator | 2025-09-19 07:21:51.900148 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-09-19 07:21:51.900159 | orchestrator | Friday 19 September 2025 07:21:12 +0000 (0:00:01.770) 0:00:38.125 ****** 2025-09-19 07:21:51.900170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:21:51.900182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:21:51.900198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 07:21:51.900210 | orchestrator |
2025-09-19 07:21:51.900221 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] ***************
2025-09-19 07:21:51.900232 | orchestrator | Friday 19 September 2025 07:21:14 +0000 (0:00:02.845) 0:00:40.970 ******
2025-09-19 07:21:51.900249 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-09-19 07:21:51.900266 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-09-19 07:21:51.900278 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-09-19 07:21:51.900289 | orchestrator |
2025-09-19 07:21:51.900331 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2025-09-19 07:21:51.900345 | orchestrator | Friday 19 September 2025 07:21:16 +0000 (0:00:01.531) 0:00:42.501 ******
2025-09-19 07:21:51.900356 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:21:51.900367 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:21:51.900378 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:21:51.900389 | orchestrator |
2025-09-19 07:21:51.900400 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2025-09-19 07:21:51.900411 | orchestrator | Friday
19 September 2025 07:21:17 +0000 (0:00:01.305) 0:00:43.806 ****** 2025-09-19 07:21:51.900423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 07:21:51.900435 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:21:51.900446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 07:21:51.900458 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:21:51.900480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 07:21:51.900498 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:21:51.900510 | orchestrator | 2025-09-19 07:21:51.900521 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-09-19 07:21:51.900532 | orchestrator | Friday 19 September 2025 07:21:18 +0000 (0:00:00.443) 0:00:44.250 ****** 2025-09-19 07:21:51.900553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:21:51.900565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:21:51.900578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 07:21:51.900589 | orchestrator |
2025-09-19 07:21:51.900601 | orchestrator | TASK [placement : Creating placement databases] ********************************
2025-09-19 07:21:51.900612 | orchestrator | Friday 19 September 2025 07:21:19 +0000 (0:00:01.659) 0:00:45.909 ******
2025-09-19 07:21:51.900623 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:21:51.900634 | orchestrator |
2025-09-19 07:21:51.900645 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2025-09-19 07:21:51.900656 | orchestrator | Friday 19 September 2025 07:21:22 +0000 (0:00:02.333) 0:00:48.243 ******
2025-09-19 07:21:51.900667 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:21:51.900678 | orchestrator |
2025-09-19 07:21:51.900689 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2025-09-19 07:21:51.900700 | orchestrator | Friday 19 September 2025 07:21:24 +0000 (0:00:02.150) 0:00:50.394 ******
2025-09-19 07:21:51.900711 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:21:51.900728 | orchestrator |
2025-09-19 07:21:51.900739 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-09-19 07:21:51.900750 | orchestrator | Friday 19 September 2025 07:21:37 +0000 (0:00:13.475) 0:01:03.869 ******
2025-09-19 07:21:51.900761 | orchestrator |
2025-09-19 07:21:51.900776 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-09-19 07:21:51.900788 | orchestrator | Friday 19 September 2025 07:21:37 +0000 (0:00:00.082) 0:01:03.952 ******
2025-09-19 07:21:51.900799 | orchestrator |
2025-09-19 07:21:51.900810 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-09-19 07:21:51.900866 | orchestrator | Friday 19 September 2025 07:21:38 +0000 (0:00:00.170) 0:01:04.122 ******
2025-09-19 07:21:51.900878 | orchestrator |
2025-09-19 07:21:51.900890 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-09-19 07:21:51.900901 | orchestrator | Friday 19 September 2025 07:21:38 +0000 (0:00:00.077) 0:01:04.199 ******
2025-09-19 07:21:51.900912 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:21:51.900923 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:21:51.900934 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:21:51.900945 | orchestrator |
2025-09-19 07:21:51.900956 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:21:51.900968 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 07:21:51.900979 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-19 07:21:51.900994 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-19 07:21:51.901005 | orchestrator |
2025-09-19 07:21:51.901015 | orchestrator |
2025-09-19 07:21:51.901025 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:21:51.901034 | orchestrator | Friday 19 September 2025 07:21:48 +0000 (0:00:10.352) 0:01:14.552 ******
2025-09-19 07:21:51.901044 | orchestrator | ===============================================================================
2025-09-19 07:21:51.901054 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.48s
2025-09-19 07:21:51.901064 | orchestrator | placement : Restart placement-api container ---------------------------- 10.35s
2025-09-19 07:21:51.901074 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.89s
2025-09-19 07:21:51.901084 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.92s
2025-09-19 07:21:51.901093 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.01s
2025-09-19 07:21:51.901103 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.64s
2025-09-19 07:21:51.901113 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.27s
2025-09-19 07:21:51.901123 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.17s
2025-09-19 07:21:51.901133 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.85s
2025-09-19 07:21:51.901143 | orchestrator | placement : Creating placement databases -------------------------------- 2.33s
2025-09-19 07:21:51.901152 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.15s
2025-09-19 07:21:51.901162 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.13s
2025-09-19 07:21:51.901172 | orchestrator | placement : Copying over config.json files for services ----------------- 1.77s
2025-09-19 07:21:51.901182 | orchestrator | placement : Check placement containers ---------------------------------- 1.66s
2025-09-19 07:21:51.901192 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.53s
2025-09-19 07:21:51.901202 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.49s
2025-09-19 07:21:51.901211 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.31s
2025-09-19 07:21:51.901227 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.24s
2025-09-19 07:21:51.901237 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.14s
2025-09-19 07:21:51.901247 | orchestrator | placement : include_tasks ----------------------------------------------- 0.90s
2025-09-19 07:21:51.901341 | orchestrator | 2025-09-19 07:21:51 | INFO  | Task 2a1770ed-d7f8-49be-9d91-ede84a2508ba is in state STARTED
2025-09-19 07:21:51.901354 | orchestrator | 2025-09-19 07:21:51 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED
2025-09-19 07:21:51.901364 | orchestrator | 2025-09-19 07:21:51 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:21:54.954521 | orchestrator | 2025-09-19 07:21:54 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED
2025-09-19 07:21:54.956542 | orchestrator | 2025-09-19 07:21:54 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED
2025-09-19 07:21:54.958348 | orchestrator | 2025-09-19 07:21:54 | INFO  | Task 2a1770ed-d7f8-49be-9d91-ede84a2508ba is in state STARTED
2025-09-19 07:21:54.960885 | orchestrator | 2025-09-19 07:21:54 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED
2025-09-19 07:21:54.961071 | orchestrator | 2025-09-19 07:21:54 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:21:58.008230 | orchestrator | 2025-09-19 07:21:58 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state STARTED
2025-09-19 07:21:58.009508 | orchestrator | 2025-09-19 07:21:58 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state STARTED
2025-09-19 07:21:58.011730 | orchestrator | 2025-09-19 07:21:58 | INFO  | Task 2a1770ed-d7f8-49be-9d91-ede84a2508ba is in state STARTED
2025-09-19 07:21:58.012998 | orchestrator | 2025-09-19 07:21:58 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED
2025-09-19 07:21:58.013246 | orchestrator | 2025-09-19 07:21:58 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:22:01.042010 | orchestrator |
2025-09-19 07:22:01.042166 | orchestrator | 2025-09-19 07:22:01 | INFO  | Task 860b9041-97a1-49c7-a87c-4a904d4b4b3f is in state SUCCESS
2025-09-19 07:22:01.043050 | orchestrator |
2025-09-19 07:22:01.043088 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 07:22:01.043101 | orchestrator |
2025-09-19 07:22:01.043113 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 07:22:01.043125 | orchestrator | Friday 19 September 2025 07:17:38 +0000 (0:00:00.282) 0:00:00.282 ******
2025-09-19 07:22:01.043138 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:22:01.043150 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:22:01.043162 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:22:01.043173 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:22:01.043184 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:22:01.043196 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:22:01.043208 | orchestrator |
2025-09-19 07:22:01.043219 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 07:22:01.043231 | orchestrator | Friday 19 September 2025 07:17:39 +0000 (0:00:00.662) 0:00:00.945 ******
2025-09-19 07:22:01.043242 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-09-19 07:22:01.043254 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-09-19 07:22:01.043265 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-09-19 07:22:01.043276 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-09-19 07:22:01.043287 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-09-19 07:22:01.043299 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-09-19 07:22:01.043310 | orchestrator |
2025-09-19 07:22:01.043321 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-09-19 07:22:01.044021 | orchestrator |
2025-09-19 07:22:01.044047 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-19 07:22:01.044059 | orchestrator | Friday 19 September 2025 07:17:39 +0000 (0:00:00.583) 0:00:01.528 ******
2025-09-19 07:22:01.044071 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:22:01.044084 | orchestrator |
2025-09-19 07:22:01.044095 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-09-19 07:22:01.044107 | orchestrator | Friday 19 September 2025 07:17:41 +0000 (0:00:01.268) 0:00:02.797 ******
2025-09-19 07:22:01.044118 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:22:01.044377 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:22:01.044390 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:22:01.044401 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:22:01.044412 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:22:01.044423 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:22:01.044434 | orchestrator |
2025-09-19 07:22:01.044445 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-09-19 07:22:01.044457 | orchestrator | Friday 19 September 2025 07:17:42 +0000 (0:00:01.373) 0:00:04.171 ******
2025-09-19 07:22:01.044468 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:22:01.044479 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:22:01.044490 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:22:01.044502 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:22:01.044512 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:22:01.044523 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:22:01.044534 | orchestrator |
2025-09-19 07:22:01.044546 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-09-19 07:22:01.044557 | orchestrator | Friday 19 September 2025 07:17:43 +0000 (0:00:01.033) 0:00:05.205 ******
2025-09-19 07:22:01.044568 | orchestrator | ok: [testbed-node-0] => {
2025-09-19 07:22:01.044580 | orchestrator |  "changed": false,
2025-09-19 07:22:01.044592 | orchestrator |  "msg": "All assertions passed"
2025-09-19 07:22:01.044603 | orchestrator | }
2025-09-19 07:22:01.044615 | orchestrator | ok: [testbed-node-1] => {
2025-09-19 07:22:01.044626 | orchestrator |  "changed": false,
2025-09-19 07:22:01.044637 | orchestrator |  "msg": "All assertions passed"
2025-09-19 07:22:01.044648 | orchestrator | }
2025-09-19 07:22:01.044659 | orchestrator | ok: [testbed-node-2] => {
2025-09-19 07:22:01.044670 | orchestrator |  "changed": false,
2025-09-19 07:22:01.045424 | orchestrator |  "msg": "All assertions passed"
2025-09-19 07:22:01.045439 | orchestrator | }
2025-09-19 07:22:01.045451 | orchestrator | ok: [testbed-node-3] => {
2025-09-19 07:22:01.045462 | orchestrator |  "changed": false,
2025-09-19 07:22:01.045473 | orchestrator |  "msg": "All assertions passed"
2025-09-19 07:22:01.045485 | orchestrator | }
2025-09-19 07:22:01.045496 | orchestrator | ok: [testbed-node-4] => {
2025-09-19 07:22:01.045507 | orchestrator |  "changed": false,
2025-09-19 07:22:01.045518 | orchestrator |  "msg": "All assertions passed"
2025-09-19 07:22:01.045530 | orchestrator | }
2025-09-19 07:22:01.045541 | orchestrator | ok: [testbed-node-5] => {
2025-09-19 07:22:01.045552 | orchestrator |  "changed": false,
2025-09-19 07:22:01.045563 | orchestrator |  "msg": "All assertions passed"
2025-09-19 07:22:01.045575 | orchestrator | }
2025-09-19 07:22:01.045586 | orchestrator |
2025-09-19 07:22:01.045597 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-09-19 07:22:01.045608 | orchestrator | Friday 19 September 2025 07:17:44 +0000 (0:00:00.631) 0:00:05.836 ******
2025-09-19 07:22:01.045619 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:22:01.045631 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:22:01.045642 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:22:01.045653 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:22:01.045664 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:22:01.045675 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:22:01.045687 | orchestrator |
2025-09-19 07:22:01.045713 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-09-19 07:22:01.045725 | orchestrator | Friday 19 September 2025 07:17:44 +0000 (0:00:00.500) 0:00:06.337 ******
2025-09-19 07:22:01.045750 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-09-19 07:22:01.045762 | orchestrator |
2025-09-19 07:22:01.045773 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-09-19 07:22:01.045784 | orchestrator | Friday 19 September 2025 07:17:47 +0000 (0:00:03.162) 0:00:09.499 ******
2025-09-19 07:22:01.045796 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-09-19 07:22:01.045808 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-09-19 07:22:01.045819 | orchestrator |
2025-09-19 07:22:01.045936 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-09-19 07:22:01.045953 | orchestrator | Friday 19 September 2025 07:17:54 +0000 (0:00:06.827) 0:00:16.326 ******
2025-09-19 07:22:01.045965 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 07:22:01.045976 | orchestrator |
2025-09-19 07:22:01.045988 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-09-19 07:22:01.045999 | orchestrator | Friday 19 September 2025 07:17:58 +0000 (0:00:03.365) 0:00:19.692 ******
2025-09-19 07:22:01.046010 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 07:22:01.046067 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-09-19 07:22:01.046080 | orchestrator |
2025-09-19 07:22:01.046091 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-09-19 07:22:01.046102 | orchestrator | Friday 19 September 2025 07:18:01 +0000 (0:00:03.839) 0:00:23.531 ******
2025-09-19 07:22:01.046113 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 07:22:01.046124 | orchestrator |
2025-09-19 07:22:01.046136 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-09-19 07:22:01.046147 | orchestrator | Friday 19 September 2025 07:18:05 +0000 (0:00:03.548) 0:00:27.079 ******
2025-09-19 07:22:01.046158 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-09-19 07:22:01.046169 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-09-19 07:22:01.046180 | orchestrator |
2025-09-19 07:22:01.046191 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-19 07:22:01.046203 | orchestrator | Friday 19 September 2025 07:18:12 +0000 (0:00:07.179) 0:00:34.259 ******
2025-09-19 07:22:01.046214 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:22:01.046225 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:22:01.046236 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:22:01.046247 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:22:01.046258 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:22:01.046269 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:22:01.046280 | orchestrator |
2025-09-19 07:22:01.046292 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-09-19 07:22:01.046303 | orchestrator | Friday 19 September 2025 07:18:13 +0000 (0:00:00.800) 0:00:35.059 ******
2025-09-19 07:22:01.046314 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:22:01.046325 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:22:01.046336 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:22:01.046347 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:22:01.046358 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:22:01.046369 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:22:01.046380 | orchestrator |
2025-09-19 07:22:01.046391 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-09-19 07:22:01.046402 | orchestrator | Friday 19 September 2025 07:18:15 +0000 (0:00:01.980) 0:00:37.040 ******
2025-09-19 07:22:01.046413 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:22:01.046425 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:22:01.046436 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:22:01.046457 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:22:01.046468 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:22:01.046479 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:22:01.046491 | orchestrator |
2025-09-19 07:22:01.046502 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-09-19 07:22:01.046513 | orchestrator | Friday 19 September 2025 07:18:16 +0000 (0:00:01.046) 0:00:38.087 ******
2025-09-19 07:22:01.046525 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:22:01.046536 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:22:01.046547 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:22:01.046560 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:22:01.046573 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:22:01.046586 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:22:01.046599 | orchestrator |
2025-09-19 07:22:01.046612 | orchestrator | TASK [neutron : Ensuring config
directories exist] ***************************** 2025-09-19 07:22:01.046625 | orchestrator | Friday 19 September 2025 07:18:18 +0000 (0:00:02.274) 0:00:40.361 ****** 2025-09-19 07:22:01.046643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:22:01.046751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:22:01.046770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:22:01.046783 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:22:01.046805 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:22:01.046817 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:22:01.046829 | orchestrator | 2025-09-19 07:22:01.046879 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-09-19 07:22:01.046893 | orchestrator | Friday 19 September 2025 07:18:22 +0000 (0:00:03.337) 0:00:43.699 ****** 2025-09-19 07:22:01.046904 | orchestrator | [WARNING]: Skipped 2025-09-19 07:22:01.046916 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-09-19 07:22:01.046928 | orchestrator | due to this access issue: 2025-09-19 07:22:01.046939 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-09-19 07:22:01.046950 | orchestrator | a directory 2025-09-19 07:22:01.046961 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 07:22:01.046972 | orchestrator | 2025-09-19 07:22:01.047054 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-19 07:22:01.047070 | orchestrator | Friday 19 September 2025 07:18:22 +0000 (0:00:00.828) 0:00:44.527 ****** 2025-09-19 07:22:01.047081 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:22:01.047094 | orchestrator | 2025-09-19 07:22:01.047106 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-09-19 07:22:01.047117 | orchestrator | Friday 19 September 2025 07:18:24 +0000 (0:00:01.362) 0:00:45.890 ****** 2025-09-19 07:22:01.047129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:22:01.047151 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:22:01.047188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:22:01.047208 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:22:01.047291 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:22:01.047309 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:22:01.047330 | orchestrator | 2025-09-19 07:22:01.047342 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-09-19 07:22:01.047354 | orchestrator | Friday 19 September 2025 07:18:27 +0000 (0:00:03.018) 0:00:48.909 ****** 2025-09-19 07:22:01.047366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:22:01.047378 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.047391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:22:01.047403 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.047487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:22:01.047505 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.047517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:22:01.047553 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.047566 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:22:01.047577 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.047589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:22:01.047600 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.047612 | orchestrator | 2025-09-19 07:22:01.047628 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-09-19 07:22:01.047647 | orchestrator | Friday 19 September 2025 07:18:30 +0000 (0:00:03.588) 0:00:52.498 ****** 2025-09-19 07:22:01.047675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:22:01.047695 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.047815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:22:01.047896 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.047918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:22:01.047931 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.047943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:22:01.047970 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.047993 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:22:01.048005 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.048023 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:22:01.048036 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.048076 | orchestrator | 2025-09-19 07:22:01.048089 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-09-19 07:22:01.048185 | orchestrator | Friday 19 September 2025 07:18:34 +0000 (0:00:03.742) 0:00:56.241 ****** 2025-09-19 07:22:01.048200 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.048212 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.048223 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.048234 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.048246 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.048257 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.048268 | orchestrator | 2025-09-19 07:22:01.048279 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-09-19 07:22:01.048291 | orchestrator | Friday 19 September 2025 07:18:36 +0000 (0:00:02.119) 0:00:58.360 ****** 2025-09-19 07:22:01.048302 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.048313 | orchestrator | 2025-09-19 07:22:01.048325 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-09-19 07:22:01.048336 | orchestrator | Friday 19 September 2025 07:18:36 +0000 (0:00:00.106) 0:00:58.467 ****** 2025-09-19 07:22:01.048347 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.048358 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.048369 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.048380 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.048391 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.048402 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.048413 | orchestrator | 2025-09-19 07:22:01.048424 | 
orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-09-19 07:22:01.048436 | orchestrator | Friday 19 September 2025 07:18:37 +0000 (0:00:00.725) 0:00:59.192 ****** 2025-09-19 07:22:01.048448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:22:01.048460 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.048472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:22:01.048491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:22:01.048512 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.048523 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.048603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:22:01.048619 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.048631 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:22:01.048643 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.048654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:22:01.048666 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.048677 | orchestrator | 2025-09-19 07:22:01.048688 | orchestrator | TASK [neutron : Copying over 
config.json files for services] ******************* 2025-09-19 07:22:01.048700 | orchestrator | Friday 19 September 2025 07:18:41 +0000 (0:00:04.195) 0:01:03.388 ****** 2025-09-19 07:22:01.048711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:22:01.048804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:22:01.048822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:22:01.048834 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:22:01.048925 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:22:01.048939 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:22:01.048964 | orchestrator | 2025-09-19 07:22:01.048976 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-09-19 07:22:01.048987 | orchestrator | Friday 19 September 2025 07:18:48 +0000 (0:00:06.551) 0:01:09.939 ****** 2025-09-19 07:22:01.049077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:22:01.049092 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:22:01.049103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:22:01.049113 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:22:01.049131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:22:01.049206 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:22:01.049221 | orchestrator | 2025-09-19 07:22:01.049231 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-09-19 07:22:01.049241 | orchestrator | Friday 19 September 2025 07:18:54 +0000 (0:00:05.873) 0:01:15.813 ****** 2025-09-19 07:22:01.049252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 
07:22:01.049262 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.049273 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:22:01.049283 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.049293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:22:01.049310 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.049325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:22:01.049412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:22:01.049428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:22:01.049439 | orchestrator | 2025-09-19 07:22:01.049449 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-09-19 07:22:01.049459 | orchestrator | Friday 19 September 2025 07:18:57 +0000 (0:00:03.100) 0:01:18.914 ****** 2025-09-19 07:22:01.049469 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.049479 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.049490 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.049500 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:22:01.049509 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:22:01.049526 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:22:01.049536 | orchestrator | 2025-09-19 07:22:01.049547 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-09-19 07:22:01.049557 | orchestrator | Friday 19 September 2025 07:19:00 +0000 (0:00:03.200) 0:01:22.114 ****** 2025-09-19 07:22:01.049567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:22:01.049578 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.049588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:22:01.049599 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.049687 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:22:01.049703 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.049714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:22:01.049725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:22:01.049749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:22:01.049760 | orchestrator | 2025-09-19 07:22:01.049770 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-09-19 07:22:01.049780 | orchestrator | Friday 19 September 2025 07:19:04 +0000 (0:00:04.307) 0:01:26.421 ****** 2025-09-19 07:22:01.049790 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.049800 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.049811 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.049821 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.049830 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.049856 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.049867 | orchestrator | 2025-09-19 07:22:01.049878 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-09-19 
07:22:01.049888 | orchestrator | Friday 19 September 2025 07:19:06 +0000 (0:00:01.766) 0:01:28.188 ****** 2025-09-19 07:22:01.049898 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.049912 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.049922 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.049932 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.049942 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.049952 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.049962 | orchestrator | 2025-09-19 07:22:01.049972 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-09-19 07:22:01.049982 | orchestrator | Friday 19 September 2025 07:19:08 +0000 (0:00:02.269) 0:01:30.457 ****** 2025-09-19 07:22:01.049992 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.050002 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.050012 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.050120 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.050136 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.050146 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.050157 | orchestrator | 2025-09-19 07:22:01.050167 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-09-19 07:22:01.050178 | orchestrator | Friday 19 September 2025 07:19:10 +0000 (0:00:01.815) 0:01:32.273 ****** 2025-09-19 07:22:01.050188 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.050199 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.050209 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.050219 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.050238 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.050248 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.050259 | orchestrator | 
2025-09-19 07:22:01.050269 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-09-19 07:22:01.050280 | orchestrator | Friday 19 September 2025 07:19:13 +0000 (0:00:03.108) 0:01:35.381 ****** 2025-09-19 07:22:01.050290 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.050301 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.050311 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.050322 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.050332 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.050342 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.050352 | orchestrator | 2025-09-19 07:22:01.050363 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-09-19 07:22:01.050374 | orchestrator | Friday 19 September 2025 07:19:15 +0000 (0:00:01.899) 0:01:37.281 ****** 2025-09-19 07:22:01.050384 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.050409 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.050420 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.050430 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.050440 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.050449 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.050460 | orchestrator | 2025-09-19 07:22:01.050470 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-09-19 07:22:01.050480 | orchestrator | Friday 19 September 2025 07:19:17 +0000 (0:00:02.159) 0:01:39.440 ****** 2025-09-19 07:22:01.050490 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-19 07:22:01.050500 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.050510 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  
2025-09-19 07:22:01.050520 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.050530 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-19 07:22:01.050540 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.050550 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-19 07:22:01.050560 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.050570 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-19 07:22:01.050580 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.050590 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-19 07:22:01.050599 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.050609 | orchestrator | 2025-09-19 07:22:01.050619 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-09-19 07:22:01.050629 | orchestrator | Friday 19 September 2025 07:19:19 +0000 (0:00:02.033) 0:01:41.474 ****** 2025-09-19 07:22:01.050640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:22:01.050651 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.050703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:22:01.050715 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.050726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:22:01.050736 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.050746 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:22:01.050757 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.050767 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:22:01.050777 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.050788 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:22:01.050805 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.050815 | orchestrator | 2025-09-19 07:22:01.050829 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-09-19 07:22:01.050840 | orchestrator | Friday 19 September 2025 07:19:22 +0000 (0:00:02.306) 0:01:43.780 ****** 2025-09-19 07:22:01.050934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:22:01.050947 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.050957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:22:01.050968 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.050978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:22:01.050988 | orchestrator | 
skipping: [testbed-node-1] 2025-09-19 07:22:01.050999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:22:01.051017 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.051032 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:22:01.051043 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.051082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:22:01.051094 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.051104 | orchestrator | 2025-09-19 07:22:01.051114 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-09-19 07:22:01.051122 | orchestrator | Friday 19 September 2025 07:19:25 +0000 (0:00:03.254) 0:01:47.035 ****** 2025-09-19 07:22:01.051131 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.051139 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.051147 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.051155 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.051163 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.051171 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.051180 | orchestrator | 2025-09-19 07:22:01.051188 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-09-19 07:22:01.051196 | orchestrator | Friday 19 September 2025 07:19:28 +0000 (0:00:02.793) 0:01:49.829 ****** 2025-09-19 07:22:01.051204 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.051212 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.051220 | orchestrator | skipping: [testbed-node-2] 2025-09-19 
07:22:01.051228 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:22:01.051237 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:22:01.051245 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:22:01.051253 | orchestrator | 2025-09-19 07:22:01.051261 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-09-19 07:22:01.051270 | orchestrator | Friday 19 September 2025 07:19:32 +0000 (0:00:04.234) 0:01:54.063 ****** 2025-09-19 07:22:01.051278 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.051286 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.051294 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.051302 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.051310 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.051324 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.051332 | orchestrator | 2025-09-19 07:22:01.051341 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-09-19 07:22:01.051349 | orchestrator | Friday 19 September 2025 07:19:35 +0000 (0:00:02.918) 0:01:56.982 ****** 2025-09-19 07:22:01.051357 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.051366 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.051374 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.051382 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.051390 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.051398 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.051406 | orchestrator | 2025-09-19 07:22:01.051414 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-09-19 07:22:01.051423 | orchestrator | Friday 19 September 2025 07:19:38 +0000 (0:00:02.827) 0:01:59.809 ****** 2025-09-19 07:22:01.051431 | orchestrator | skipping: [testbed-node-1] 2025-09-19 
07:22:01.051439 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.051447 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.051455 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.051463 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.051471 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.051479 | orchestrator | 2025-09-19 07:22:01.051488 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-09-19 07:22:01.051496 | orchestrator | Friday 19 September 2025 07:19:40 +0000 (0:00:02.649) 0:02:02.459 ****** 2025-09-19 07:22:01.051504 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.051512 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.051520 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.051528 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.051536 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.051545 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.051553 | orchestrator | 2025-09-19 07:22:01.051561 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-09-19 07:22:01.051569 | orchestrator | Friday 19 September 2025 07:19:44 +0000 (0:00:03.324) 0:02:05.783 ****** 2025-09-19 07:22:01.051578 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.051586 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.051594 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.051602 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.051610 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.051618 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.051626 | orchestrator | 2025-09-19 07:22:01.051635 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-09-19 07:22:01.051647 | orchestrator | Friday 19 
September 2025 07:19:47 +0000 (0:00:03.258) 0:02:09.042 ****** 2025-09-19 07:22:01.051655 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.051663 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.051671 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.051679 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.051688 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.051696 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.051704 | orchestrator | 2025-09-19 07:22:01.051712 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-09-19 07:22:01.051720 | orchestrator | Friday 19 September 2025 07:19:49 +0000 (0:00:02.340) 0:02:11.383 ****** 2025-09-19 07:22:01.051728 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.051759 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.051768 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.051777 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.051785 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.051792 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.051800 | orchestrator | 2025-09-19 07:22:01.051809 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-09-19 07:22:01.051824 | orchestrator | Friday 19 September 2025 07:19:52 +0000 (0:00:03.044) 0:02:14.427 ****** 2025-09-19 07:22:01.051832 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.051853 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.051862 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.051870 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.051878 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.051886 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.051894 | orchestrator | 2025-09-19 07:22:01.051902 | orchestrator 
| TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-09-19 07:22:01.051910 | orchestrator | Friday 19 September 2025 07:19:56 +0000 (0:00:04.105) 0:02:18.532 ****** 2025-09-19 07:22:01.051918 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-19 07:22:01.051927 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.051935 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-19 07:22:01.051943 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.051951 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-19 07:22:01.051959 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.051967 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-19 07:22:01.051975 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.051983 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-19 07:22:01.051992 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.052000 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-19 07:22:01.052008 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.052016 | orchestrator | 2025-09-19 07:22:01.052024 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-09-19 07:22:01.052032 | orchestrator | Friday 19 September 2025 07:20:01 +0000 (0:00:04.586) 0:02:23.119 ****** 2025-09-19 07:22:01.052041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 
'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:22:01.052050 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.052063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:22:01.052077 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.052110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:22:01.052120 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.052128 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:22:01.052136 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.052145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': 
True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:22:01.052154 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.052162 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:22:01.052170 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.052179 | orchestrator | 2025-09-19 07:22:01.052187 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-09-19 07:22:01.052195 | orchestrator | Friday 19 September 2025 07:20:04 +0000 (0:00:03.388) 0:02:26.507 ****** 2025-09-19 07:22:01.052207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:22:01.052249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:22:01.052259 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:22:01.052268 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:22:01.052277 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:22:01.052295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:22:01.052304 | orchestrator | 2025-09-19 07:22:01.052312 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-19 07:22:01.052341 | orchestrator | Friday 19 September 2025 07:20:08 +0000 (0:00:03.843) 0:02:30.351 ****** 2025-09-19 07:22:01.052351 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.052359 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.052367 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.052375 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:22:01.052383 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:22:01.052391 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:22:01.052399 | orchestrator | 2025-09-19 07:22:01.052408 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-09-19 07:22:01.052416 | orchestrator | Friday 19 September 2025 07:20:09 +0000 (0:00:00.621) 0:02:30.972 ****** 
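The neutron items logged above each carry a container definition dict: an `image`, a `volumes` list (note the trailing `''` placeholder entries for mounts that are disabled in this deployment), and a `healthcheck` with string-valued timing fields. As an illustration only — this is a minimal sketch of processing that dict shape, not kolla-ansible's actual code — the placeholder volumes can be filtered and the healthcheck fields coerced before handing them to a container runtime:

```python
# Sketch: process a kolla-style container definition shaped like the
# neutron-server items in the log above. Illustrative only; this is NOT
# the actual kolla-ansible implementation.
service = {
    "key": "neutron-server",
    "value": {
        "container_name": "neutron_server",
        "image": "registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711",
        "enabled": True,
        "volumes": [
            "/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "kolla_logs:/var/log/kolla/",
            "",  # conditional mounts render as empty strings when disabled
            "",
        ],
        "healthcheck": {
            "interval": "30", "retries": "3", "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9696"],
            "timeout": "30",
        },
    },
}

def effective_volumes(svc):
    """Drop the empty placeholder entries before passing volumes on."""
    return [v for v in svc["value"]["volumes"] if v]

def healthcheck_args(svc):
    """Coerce the string-valued timing fields to ints."""
    hc = svc["value"]["healthcheck"]
    return {
        "test": hc["test"],
        "interval": int(hc["interval"]),
        "retries": int(hc["retries"]),
        "timeout": int(hc["timeout"]),
    }

print(effective_volumes(service))
print(healthcheck_args(service)["interval"])  # -> 30
```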
2025-09-19 07:22:01.052424 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:22:01.052432 | orchestrator | 2025-09-19 07:22:01.052441 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-09-19 07:22:01.052449 | orchestrator | Friday 19 September 2025 07:20:11 +0000 (0:00:02.261) 0:02:33.233 ****** 2025-09-19 07:22:01.052457 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:22:01.052465 | orchestrator | 2025-09-19 07:22:01.052473 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-09-19 07:22:01.052481 | orchestrator | Friday 19 September 2025 07:20:14 +0000 (0:00:02.346) 0:02:35.580 ****** 2025-09-19 07:22:01.052490 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:22:01.052498 | orchestrator | 2025-09-19 07:22:01.052506 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-19 07:22:01.052514 | orchestrator | Friday 19 September 2025 07:20:58 +0000 (0:00:44.252) 0:03:19.833 ****** 2025-09-19 07:22:01.052522 | orchestrator | 2025-09-19 07:22:01.052530 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-19 07:22:01.052538 | orchestrator | Friday 19 September 2025 07:20:58 +0000 (0:00:00.062) 0:03:19.895 ****** 2025-09-19 07:22:01.052546 | orchestrator | 2025-09-19 07:22:01.052555 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-19 07:22:01.052563 | orchestrator | Friday 19 September 2025 07:20:58 +0000 (0:00:00.059) 0:03:19.955 ****** 2025-09-19 07:22:01.052571 | orchestrator | 2025-09-19 07:22:01.052579 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-19 07:22:01.052587 | orchestrator | Friday 19 September 2025 07:20:58 +0000 (0:00:00.060) 0:03:20.015 ****** 2025-09-19 07:22:01.052595 | orchestrator | 2025-09-19 07:22:01.052603 | 
orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-19 07:22:01.052611 | orchestrator | Friday 19 September 2025 07:20:58 +0000 (0:00:00.174) 0:03:20.190 ****** 2025-09-19 07:22:01.052620 | orchestrator | 2025-09-19 07:22:01.052628 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-19 07:22:01.052636 | orchestrator | Friday 19 September 2025 07:20:58 +0000 (0:00:00.076) 0:03:20.266 ****** 2025-09-19 07:22:01.052649 | orchestrator | 2025-09-19 07:22:01.052657 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-09-19 07:22:01.052665 | orchestrator | Friday 19 September 2025 07:20:58 +0000 (0:00:00.126) 0:03:20.392 ****** 2025-09-19 07:22:01.052673 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:22:01.052681 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:22:01.052689 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:22:01.052697 | orchestrator | 2025-09-19 07:22:01.052706 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-09-19 07:22:01.052714 | orchestrator | Friday 19 September 2025 07:21:31 +0000 (0:00:32.597) 0:03:52.990 ****** 2025-09-19 07:22:01.052722 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:22:01.052730 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:22:01.052738 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:22:01.052746 | orchestrator | 2025-09-19 07:22:01.052754 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:22:01.052763 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-19 07:22:01.052772 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-19 07:22:01.052780 | orchestrator | testbed-node-2 : ok=17  
changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-19 07:22:01.052788 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-09-19 07:22:01.052797 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-09-19 07:22:01.052805 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-09-19 07:22:01.052813 | orchestrator | 2025-09-19 07:22:01.052821 | orchestrator | 2025-09-19 07:22:01.052829 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:22:01.052853 | orchestrator | Friday 19 September 2025 07:21:59 +0000 (0:00:28.345) 0:04:21.335 ****** 2025-09-19 07:22:01.052862 | orchestrator | =============================================================================== 2025-09-19 07:22:01.052870 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 44.25s 2025-09-19 07:22:01.052879 | orchestrator | neutron : Restart neutron-server container ----------------------------- 32.60s 2025-09-19 07:22:01.052887 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 28.35s 2025-09-19 07:22:01.052895 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.18s 2025-09-19 07:22:01.052927 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.83s 2025-09-19 07:22:01.052936 | orchestrator | neutron : Copying over config.json files for services ------------------- 6.55s 2025-09-19 07:22:01.052944 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.87s 2025-09-19 07:22:01.052952 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 4.59s 2025-09-19 07:22:01.052960 | orchestrator | neutron : Copying over ml2_conf.ini 
------------------------------------- 4.31s 2025-09-19 07:22:01.052968 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.23s 2025-09-19 07:22:01.052976 | orchestrator | neutron : Copying over existing policy file ----------------------------- 4.20s 2025-09-19 07:22:01.052985 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 4.11s 2025-09-19 07:22:01.052993 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.84s 2025-09-19 07:22:01.053001 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.84s 2025-09-19 07:22:01.053017 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.74s 2025-09-19 07:22:01.053025 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.59s 2025-09-19 07:22:01.053033 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.55s 2025-09-19 07:22:01.053041 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 3.39s 2025-09-19 07:22:01.053049 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.37s 2025-09-19 07:22:01.053057 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.34s 2025-09-19 07:22:01.053065 | orchestrator | 2025-09-19 07:22:01.053073 | orchestrator | 2025-09-19 07:22:01.053081 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:22:01.053090 | orchestrator | 2025-09-19 07:22:01.053098 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 07:22:01.053106 | orchestrator | Friday 19 September 2025 07:19:03 +0000 (0:00:00.348) 0:00:00.348 ****** 2025-09-19 07:22:01.053114 | orchestrator | ok: [testbed-node-0] 2025-09-19 
07:22:01.053122 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:22:01.053130 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:22:01.053138 | orchestrator | 2025-09-19 07:22:01.053146 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:22:01.053155 | orchestrator | Friday 19 September 2025 07:19:03 +0000 (0:00:00.308) 0:00:00.657 ****** 2025-09-19 07:22:01.053163 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-09-19 07:22:01.053171 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-09-19 07:22:01.053179 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-09-19 07:22:01.053187 | orchestrator | 2025-09-19 07:22:01.053195 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-09-19 07:22:01.053203 | orchestrator | 2025-09-19 07:22:01.053211 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-19 07:22:01.053219 | orchestrator | Friday 19 September 2025 07:19:04 +0000 (0:00:00.348) 0:00:01.005 ****** 2025-09-19 07:22:01.053228 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:22:01.053236 | orchestrator | 2025-09-19 07:22:01.053244 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-09-19 07:22:01.053252 | orchestrator | Friday 19 September 2025 07:19:04 +0000 (0:00:00.491) 0:00:01.497 ****** 2025-09-19 07:22:01.053260 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-09-19 07:22:01.053268 | orchestrator | 2025-09-19 07:22:01.053276 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-09-19 07:22:01.053284 | orchestrator | Friday 19 September 2025 07:19:08 +0000 (0:00:03.474) 0:00:04.971 ****** 2025-09-19 07:22:01.053292 | 
orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-09-19 07:22:01.053300 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-09-19 07:22:01.053309 | orchestrator | 2025-09-19 07:22:01.053317 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-09-19 07:22:01.053325 | orchestrator | Friday 19 September 2025 07:19:14 +0000 (0:00:06.889) 0:00:11.861 ****** 2025-09-19 07:22:01.053333 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 07:22:01.053341 | orchestrator | 2025-09-19 07:22:01.053349 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-09-19 07:22:01.053357 | orchestrator | Friday 19 September 2025 07:19:18 +0000 (0:00:03.601) 0:00:15.462 ****** 2025-09-19 07:22:01.053365 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 07:22:01.053373 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-09-19 07:22:01.053381 | orchestrator | 2025-09-19 07:22:01.053389 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-09-19 07:22:01.053404 | orchestrator | Friday 19 September 2025 07:19:22 +0000 (0:00:04.275) 0:00:19.738 ****** 2025-09-19 07:22:01.053412 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 07:22:01.053420 | orchestrator | 2025-09-19 07:22:01.053432 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-09-19 07:22:01.053440 | orchestrator | Friday 19 September 2025 07:19:26 +0000 (0:00:03.839) 0:00:23.577 ****** 2025-09-19 07:22:01.053449 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-09-19 07:22:01.053457 | orchestrator | 2025-09-19 07:22:01.053465 | orchestrator | TASK [designate : Ensuring config 
directories exist] *************************** 2025-09-19 07:22:01.053473 | orchestrator | Friday 19 September 2025 07:19:30 +0000 (0:00:04.093) 0:00:27.671 ****** 2025-09-19 07:22:01.053505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:22:01.053517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 
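The designate-api items above also carry a `haproxy` sub-dict with one internal and one external listener (the external one adds an `external_fqdn`). A hedged sketch of deriving listener endpoints from that shape — function and placeholder names here are illustrative, not part of kolla-ansible — also has to tolerate the mixed truthiness visible in this log, where `enabled` is the string `'yes'` in the designate items but the boolean `True` in the neutron ones:

```python
# Sketch: derive load-balancer listeners from the 'haproxy' sub-dict shown
# in the designate-api item above. Illustrative only, not kolla-ansible code.
haproxy = {
    "designate_api": {
        "enabled": "yes", "mode": "http", "external": False,
        "port": "9001", "listen_port": "9001",
    },
    "designate_api_external": {
        "enabled": "yes", "mode": "http", "external": True,
        "external_fqdn": "api.testbed.osism.xyz",
        "port": "9001", "listen_port": "9001",
    },
}

def listeners(cfg):
    """Yield (name, bind, backend_port) for every enabled listener.

    Accepts both 'yes' and True for 'enabled', matching the two styles
    that appear in this log.
    """
    for name, lb in cfg.items():
        if lb["enabled"] not in (True, "yes"):
            continue
        # 'internal-vip' is a hypothetical placeholder for the internal VIP.
        bind = lb.get("external_fqdn", "internal-vip")
        yield name, f"{bind}:{lb['listen_port']}", int(lb["port"])

for entry in listeners(haproxy):
    print(entry)
```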
2025-09-19 07:22:01.053526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:22:01.053535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:22:01.053554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:22:01.053586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:22:01.053596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.053605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.053613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.053622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.053636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.053649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.053679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.053689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.053698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.053706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.053715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.053732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.053741 | orchestrator | 2025-09-19 07:22:01.053749 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-09-19 07:22:01.053757 | orchestrator | Friday 19 September 2025 07:19:34 +0000 (0:00:03.505) 0:00:31.176 ****** 2025-09-19 07:22:01.053765 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.053773 | orchestrator | 2025-09-19 07:22:01.053781 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-09-19 07:22:01.053790 | orchestrator | Friday 19 September 2025 07:19:34 +0000 (0:00:00.157) 0:00:31.334 ****** 2025-09-19 07:22:01.053798 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.053826 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.053836 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.053880 | orchestrator | 2025-09-19 07:22:01.053890 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-19 07:22:01.053898 | orchestrator | Friday 19 September 2025 07:19:34 +0000 
(0:00:00.530) 0:00:31.864 ****** 2025-09-19 07:22:01.053906 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:22:01.053914 | orchestrator | 2025-09-19 07:22:01.053922 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-09-19 07:22:01.053931 | orchestrator | Friday 19 September 2025 07:19:36 +0000 (0:00:01.055) 0:00:32.920 ****** 2025-09-19 07:22:01.053939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:22:01.053948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 
'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:22:01.053963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:22:01.053978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054009 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 
07:22:01.054053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054083 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054189 | orchestrator | 2025-09-19 07:22:01.054196 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-09-19 07:22:01.054203 | orchestrator | Friday 19 September 2025 07:19:43 +0000 (0:00:07.313) 0:00:40.233 ****** 2025-09-19 07:22:01.054210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:22:01.054218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 07:22:01.054229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.054236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.054248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.054272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.054280 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.054288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:22:01.054295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 07:22:01.054307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.054314 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.054325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.054350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.054358 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.054365 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:22:01.054372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 07:22:01.054385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.054392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.054399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.054427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.054435 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.054443 | orchestrator | 2025-09-19 07:22:01.054450 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-09-19 07:22:01.054457 | orchestrator | Friday 19 September 2025 07:19:45 +0000 (0:00:01.733) 0:00:41.966 ****** 2025-09-19 07:22:01.054464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:22:01.054476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 07:22:01.054483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.054490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.054501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.054527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.054535 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.054542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:22:01.054554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 07:22:01.054561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.054568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.054575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.054603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.054612 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.054619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 
07:22:01.054633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 07:22:01.054640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.054647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.054654 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.054683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.054691 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.054698 | orchestrator | 2025-09-19 07:22:01.054705 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-09-19 07:22:01.054712 | orchestrator | Friday 19 September 2025 07:19:47 +0000 (0:00:02.343) 0:00:44.310 ****** 2025-09-19 07:22:01.054724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:22:01.054731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:22:01.054738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:22:01.054746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.054933 | orchestrator | 2025-09-19 07:22:01.054940 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-09-19 07:22:01.054966 | orchestrator | Friday 19 September 2025 07:19:53 +0000 (0:00:06.247) 0:00:50.558 ****** 2025-09-19 07:22:01.054974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:22:01.054981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:22:01.054989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:22:01.054996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2025-09-19 07:22:01.055007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:22:01.055023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:22:01.055030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.055038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.055045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.055052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 
2025-09-19 07:22:01.055061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.055089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.055102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.055114 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.055126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.055133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.055140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.055155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.055167 | orchestrator | 2025-09-19 07:22:01.055178 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-09-19 07:22:01.055191 | orchestrator | Friday 19 September 2025 07:20:15 +0000 (0:00:21.854) 0:01:12.413 ****** 2025-09-19 07:22:01.055201 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-19 07:22:01.055217 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-19 07:22:01.055228 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-19 07:22:01.055238 | orchestrator | 2025-09-19 07:22:01.055248 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-09-19 
07:22:01.055259 | orchestrator | Friday 19 September 2025 07:20:19 +0000 (0:00:04.421) 0:01:16.834 ****** 2025-09-19 07:22:01.055271 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-19 07:22:01.055283 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-19 07:22:01.055294 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-19 07:22:01.055304 | orchestrator | 2025-09-19 07:22:01.055312 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-09-19 07:22:01.055324 | orchestrator | Friday 19 September 2025 07:20:22 +0000 (0:00:02.561) 0:01:19.396 ****** 2025-09-19 07:22:01.055336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:22:01.055348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:22:01.055360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:22:01.055390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:22:01.055406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.055414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:22:01.055421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.055433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.055454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.055471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.055490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.055502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:22:01.055515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.055523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.055530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.055548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.055565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.055583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.055596 | orchestrator | 2025-09-19 07:22:01.055607 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-09-19 07:22:01.055618 | orchestrator | Friday 19 September 2025 07:20:25 +0000 (0:00:02.871) 0:01:22.267 ****** 2025-09-19 07:22:01.055630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:22:01.055644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:22:01.055662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:22:01.055679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:22:01.055697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.055710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.055722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.055733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:22:01.055751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.055763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.055784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.055795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:22:01.055806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.055817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.055835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.055865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.055881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.055901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.055913 | orchestrator | 2025-09-19 07:22:01.055925 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-19 07:22:01.055937 | orchestrator | Friday 19 September 2025 07:20:27 +0000 (0:00:02.539) 0:01:24.807 ****** 2025-09-19 07:22:01.055949 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.055960 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.055972 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.055983 | orchestrator | 2025-09-19 07:22:01.055994 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-09-19 07:22:01.056005 | orchestrator | Friday 19 September 2025 07:20:29 +0000 (0:00:01.262) 0:01:26.069 ****** 2025-09-19 07:22:01.056016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:22:01.056035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 07:22:01.056043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.056051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.056062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.056074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.056082 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.056089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:22:01.056100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 07:22:01.056108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.056115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.056128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.056139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.056146 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.056153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:22:01.056165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 07:22:01.056172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 
07:22:01.056180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.056190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.056201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:22:01.056208 | orchestrator | skipping: [testbed-node-2] 2025-09-19 
07:22:01.056215 | orchestrator | 2025-09-19 07:22:01.056222 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-09-19 07:22:01.056229 | orchestrator | Friday 19 September 2025 07:20:30 +0000 (0:00:01.355) 0:01:27.425 ****** 2025-09-19 07:22:01.056242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:22:01.056249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:22:01.056256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:22:01.056267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:22:01.056278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:22:01.056291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:22:01.056298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.056305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.056312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.056323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.056334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.056341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.056353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.056360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.056367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.056375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.056385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.056397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:22:01.056410 | orchestrator | 2025-09-19 07:22:01.056417 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-19 07:22:01.056424 | orchestrator | Friday 19 September 2025 07:20:35 +0000 (0:00:04.892) 0:01:32.318 ****** 2025-09-19 07:22:01.056431 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:22:01.056438 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:22:01.056445 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:22:01.056452 | orchestrator | 2025-09-19 07:22:01.056458 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-09-19 07:22:01.056465 | orchestrator | Friday 19 September 2025 07:20:36 +0000 (0:00:00.733) 0:01:33.051 ****** 2025-09-19 07:22:01.056472 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-09-19 07:22:01.056479 | orchestrator | 2025-09-19 07:22:01.056486 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-09-19 07:22:01.056493 | 
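Each item looped over above carries a kolla-style `healthcheck` dictionary (`interval`, `retries`, `start_period`, `test`, `timeout`). A minimal sketch of how such a dictionary maps onto Docker's documented `--health-*` run flags, assuming the numeric values in the log are seconds (the `healthcheck_to_docker_flags` helper is hypothetical, not part of kolla-ansible):

```python
# Sketch: render a kolla-style healthcheck dict (shape copied from the
# task items in the log above) into equivalent `docker run` flags.
# Assumption: interval/start_period/timeout values are seconds.
def healthcheck_to_docker_flags(hc: dict) -> list[str]:
    return [
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
        # test is ['CMD-SHELL', '<command>']; keep only the shell command
        "--health-cmd=" + hc["test"][1],
    ]

hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port designate-worker 5672"],
    "timeout": "30",
}
print(healthcheck_to_docker_flags(hc))
```

The `healthcheck_port` and `healthcheck_curl` commands referenced in `test` are scripts shipped inside the kolla images themselves.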
orchestrator | Friday 19 September 2025 07:20:38 +0000 (0:00:02.348) 0:01:35.400 ****** 2025-09-19 07:22:01.056500 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-19 07:22:01.056507 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-09-19 07:22:01.056514 | orchestrator | 2025-09-19 07:22:01.056520 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-09-19 07:22:01.056527 | orchestrator | Friday 19 September 2025 07:20:41 +0000 (0:00:02.656) 0:01:38.056 ****** 2025-09-19 07:22:01.056534 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:22:01.056543 | orchestrator | 2025-09-19 07:22:01.056554 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-19 07:22:01.056570 | orchestrator | Friday 19 September 2025 07:20:58 +0000 (0:00:17.207) 0:01:55.264 ****** 2025-09-19 07:22:01.056584 | orchestrator | 2025-09-19 07:22:01.056594 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-19 07:22:01.056605 | orchestrator | Friday 19 September 2025 07:20:58 +0000 (0:00:00.060) 0:01:55.325 ****** 2025-09-19 07:22:01.056617 | orchestrator | 2025-09-19 07:22:01.056627 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-19 07:22:01.056638 | orchestrator | Friday 19 September 2025 07:20:58 +0000 (0:00:00.058) 0:01:55.384 ****** 2025-09-19 07:22:01.056648 | orchestrator | 2025-09-19 07:22:01.056658 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-09-19 07:22:01.056667 | orchestrator | Friday 19 September 2025 07:20:58 +0000 (0:00:00.064) 0:01:55.448 ****** 2025-09-19 07:22:01.056677 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:22:01.056687 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:22:01.056698 | orchestrator | changed: [testbed-node-2] 
2025-09-19 07:22:01.056709 | orchestrator | 2025-09-19 07:22:01.056720 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-09-19 07:22:01.056731 | orchestrator | Friday 19 September 2025 07:21:09 +0000 (0:00:10.865) 0:02:06.313 ****** 2025-09-19 07:22:01.056741 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:22:01.056753 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:22:01.056765 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:22:01.056775 | orchestrator | 2025-09-19 07:22:01.056788 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-09-19 07:22:01.056797 | orchestrator | Friday 19 September 2025 07:21:18 +0000 (0:00:08.947) 0:02:15.261 ****** 2025-09-19 07:22:01.056804 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:22:01.056810 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:22:01.056817 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:22:01.056824 | orchestrator | 2025-09-19 07:22:01.056831 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-09-19 07:22:01.056838 | orchestrator | Friday 19 September 2025 07:21:29 +0000 (0:00:11.513) 0:02:26.775 ****** 2025-09-19 07:22:01.056895 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:22:01.056903 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:22:01.056910 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:22:01.056917 | orchestrator | 2025-09-19 07:22:01.056924 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-09-19 07:22:01.056931 | orchestrator | Friday 19 September 2025 07:21:40 +0000 (0:00:10.149) 0:02:36.924 ****** 2025-09-19 07:22:01.056938 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:22:01.056945 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:22:01.056952 | orchestrator | changed: [testbed-node-2] 2025-09-19 
07:22:01.056958 | orchestrator | 2025-09-19 07:22:01.056965 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-09-19 07:22:01.056972 | orchestrator | Friday 19 September 2025 07:21:45 +0000 (0:00:05.326) 0:02:42.251 ****** 2025-09-19 07:22:01.056979 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:22:01.056986 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:22:01.056993 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:22:01.057000 | orchestrator | 2025-09-19 07:22:01.057007 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-09-19 07:22:01.057014 | orchestrator | Friday 19 September 2025 07:21:51 +0000 (0:00:06.067) 0:02:48.318 ****** 2025-09-19 07:22:01.057025 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:22:01.057032 | orchestrator | 2025-09-19 07:22:01.057039 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:22:01.057047 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-19 07:22:01.057054 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 07:22:01.057068 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 07:22:01.057075 | orchestrator | 2025-09-19 07:22:01.057082 | orchestrator | 2025-09-19 07:22:01.057089 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:22:01.057096 | orchestrator | Friday 19 September 2025 07:21:58 +0000 (0:00:06.738) 0:02:55.057 ****** 2025-09-19 07:22:01.057103 | orchestrator | =============================================================================== 2025-09-19 07:22:01.057110 | orchestrator | designate : Copying over designate.conf -------------------------------- 21.85s 
2025-09-19 07:22:01.057117 | orchestrator | designate : Running Designate bootstrap container ---------------------- 17.21s 2025-09-19 07:22:01.057123 | orchestrator | designate : Restart designate-central container ------------------------ 11.51s 2025-09-19 07:22:01.057130 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 10.87s 2025-09-19 07:22:01.057137 | orchestrator | designate : Restart designate-producer container ----------------------- 10.15s 2025-09-19 07:22:01.057144 | orchestrator | designate : Restart designate-api container ----------------------------- 8.95s 2025-09-19 07:22:01.057151 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.31s 2025-09-19 07:22:01.057158 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.89s 2025-09-19 07:22:01.057164 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.74s 2025-09-19 07:22:01.057170 | orchestrator | designate : Copying over config.json files for services ----------------- 6.25s 2025-09-19 07:22:01.057177 | orchestrator | designate : Restart designate-worker container -------------------------- 6.07s 2025-09-19 07:22:01.057183 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.33s 2025-09-19 07:22:01.057189 | orchestrator | designate : Check designate containers ---------------------------------- 4.89s 2025-09-19 07:22:01.057196 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.42s 2025-09-19 07:22:01.057202 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.28s 2025-09-19 07:22:01.057213 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.09s 2025-09-19 07:22:01.057219 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.84s 2025-09-19 
07:22:01.057226 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.60s 2025-09-19 07:22:01.057232 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.51s 2025-09-19 07:22:01.057239 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.47s 2025-09-19 07:22:01.057245 | orchestrator | 2025-09-19 07:22:01 | INFO  | Task 777b76af-5888-4a46-a65f-efd3d1ae353b is in state SUCCESS 2025-09-19 07:22:01.057251 | orchestrator | 2025-09-19 07:22:01 | INFO  | Task 36f41144-48e8-465b-8b13-41b23056b1d4 is in state STARTED 2025-09-19 07:22:01.057258 | orchestrator | 2025-09-19 07:22:01 | INFO  | Task 2a1770ed-d7f8-49be-9d91-ede84a2508ba is in state STARTED 2025-09-19 07:22:01.057264 | orchestrator | 2025-09-19 07:22:01 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:22:01.057271 | orchestrator | 2025-09-19 07:22:01 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:22:04.082135 | orchestrator | 2025-09-19 07:22:04 | INFO  | Task b790faa0-ce7b-40be-b9f3-a3994bc94975 is in state STARTED 2025-09-19 07:22:04.083815 | orchestrator | 2025-09-19 07:22:04 | INFO  | Task 36f41144-48e8-465b-8b13-41b23056b1d4 is in state STARTED 2025-09-19 07:22:04.084742 | orchestrator | 2025-09-19 07:22:04 | INFO  | Task 2a1770ed-d7f8-49be-9d91-ede84a2508ba is in state STARTED 2025-09-19 07:22:04.086766 | orchestrator | 2025-09-19 07:22:04 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:22:04.087259 | orchestrator | 2025-09-19 07:22:04 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:22:07.129025 | orchestrator | 2025-09-19 07:22:07 | INFO  | Task b790faa0-ce7b-40be-b9f3-a3994bc94975 is in state SUCCESS 2025-09-19 07:22:07.131587 | orchestrator | 2025-09-19 07:22:07 | INFO  | Task 36f41144-48e8-465b-8b13-41b23056b1d4 is in state STARTED 2025-09-19 07:22:07.134341 | orchestrator | 2025-09-19 
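The TASKS RECAP entries above follow a fixed shape: task name, a dashed ruler, then the duration in seconds. A small, hypothetical parser for turning such lines into `(task, seconds)` pairs, e.g. for spotting the slowest steps of a run:

```python
import re

# Sketch: parse one "TASKS RECAP" line of the form
#   "<task name> ----...---- 21.85s"
# into a (task, seconds) tuple; returns None for non-matching lines.
RECAP_RE = re.compile(r"^(?P<task>.+?) -+ (?P<secs>[\d.]+)s$")

def parse_recap(line: str):
    m = RECAP_RE.match(line.strip())
    return (m.group("task"), float(m.group("secs"))) if m else None

line = ("designate : Copying over designate.conf "
        "-------------------------------- 21.85s")
print(parse_recap(line))
```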
07:22:07 | INFO  | Task 2a1770ed-d7f8-49be-9d91-ede84a2508ba is in state STARTED 2025-09-19 07:22:07.136225 | orchestrator | 2025-09-19 07:22:07 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:22:07.136316 | orchestrator | 2025-09-19 07:22:07 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:22:10.165119 | orchestrator | 2025-09-19 07:22:10 | INFO  | Task 36f41144-48e8-465b-8b13-41b23056b1d4 is in state STARTED 2025-09-19 07:22:10.166999 | orchestrator | 2025-09-19 07:22:10 | INFO  | Task 2a1770ed-d7f8-49be-9d91-ede84a2508ba is in state STARTED 2025-09-19 07:22:10.167830 | orchestrator | 2025-09-19 07:22:10 | INFO  | Task 1e23eba2-3210-495b-8e2f-05b25d4173c2 is in state STARTED 2025-09-19 07:22:10.168627 | orchestrator | 2025-09-19 07:22:10 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:22:10.168665 | orchestrator | 2025-09-19 07:22:10 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:22:13.201555 | orchestrator | 2025-09-19 07:22:13 | INFO  | Task 36f41144-48e8-465b-8b13-41b23056b1d4 is in state STARTED 2025-09-19 07:22:13.203054 | orchestrator | 2025-09-19 07:22:13 | INFO  | Task 2a1770ed-d7f8-49be-9d91-ede84a2508ba is in state STARTED 2025-09-19 07:22:13.205950 | orchestrator | 2025-09-19 07:22:13 | INFO  | Task 1e23eba2-3210-495b-8e2f-05b25d4173c2 is in state STARTED 2025-09-19 07:22:13.207379 | orchestrator | 2025-09-19 07:22:13 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:22:13.207568 | orchestrator | 2025-09-19 07:22:13 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:22:16.243836 | orchestrator | 2025-09-19 07:22:16 | INFO  | Task 36f41144-48e8-465b-8b13-41b23056b1d4 is in state STARTED 2025-09-19 07:22:16.245553 | orchestrator | 2025-09-19 07:22:16 | INFO  | Task 2a1770ed-d7f8-49be-9d91-ede84a2508ba is in state STARTED 2025-09-19 07:22:16.248856 | orchestrator | 2025-09-19 07:22:16 | INFO  | Task 
1e23eba2-3210-495b-8e2f-05b25d4173c2 is in state STARTED 2025-09-19 07:22:16.250131 | orchestrator | 2025-09-19 07:22:16 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:22:16.250750 | orchestrator | 2025-09-19 07:22:16 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:22:19.293777 | orchestrator | 2025-09-19 07:22:19 | INFO  | Task 36f41144-48e8-465b-8b13-41b23056b1d4 is in state STARTED 2025-09-19 07:22:19.294983 | orchestrator | 2025-09-19 07:22:19 | INFO  | Task 2a1770ed-d7f8-49be-9d91-ede84a2508ba is in state STARTED 2025-09-19 07:22:19.295806 | orchestrator | 2025-09-19 07:22:19 | INFO  | Task 1e23eba2-3210-495b-8e2f-05b25d4173c2 is in state STARTED 2025-09-19 07:22:19.297804 | orchestrator | 2025-09-19 07:22:19 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:22:19.297863 | orchestrator | 2025-09-19 07:22:19 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:22:22.339181 | orchestrator | 2025-09-19 07:22:22 | INFO  | Task 36f41144-48e8-465b-8b13-41b23056b1d4 is in state STARTED 2025-09-19 07:22:22.340549 | orchestrator | 2025-09-19 07:22:22 | INFO  | Task 2a1770ed-d7f8-49be-9d91-ede84a2508ba is in state STARTED 2025-09-19 07:22:22.342530 | orchestrator | 2025-09-19 07:22:22 | INFO  | Task 1e23eba2-3210-495b-8e2f-05b25d4173c2 is in state STARTED 2025-09-19 07:22:22.343690 | orchestrator | 2025-09-19 07:22:22 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:22:22.343737 | orchestrator | 2025-09-19 07:22:22 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:22:25.384698 | orchestrator | 2025-09-19 07:22:25 | INFO  | Task 36f41144-48e8-465b-8b13-41b23056b1d4 is in state STARTED 2025-09-19 07:22:25.386514 | orchestrator | 2025-09-19 07:22:25 | INFO  | Task 2a1770ed-d7f8-49be-9d91-ede84a2508ba is in state STARTED 2025-09-19 07:22:25.388646 | orchestrator | 2025-09-19 07:22:25 | INFO  | Task 
1e23eba2-3210-495b-8e2f-05b25d4173c2 is in state STARTED
2025-09-19 07:22:25.390161 | orchestrator | 2025-09-19 07:22:25 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED
2025-09-19 07:22:25.390345 | orchestrator | 2025-09-19 07:22:25 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:22:28.432224 | orchestrator | 2025-09-19 07:22:28 | INFO  | Task 36f41144-48e8-465b-8b13-41b23056b1d4 is in state STARTED
2025-09-19 07:22:28.433848 | orchestrator | 2025-09-19 07:22:28 | INFO  | Task 2a1770ed-d7f8-49be-9d91-ede84a2508ba is in state STARTED
2025-09-19 07:22:28.435234 | orchestrator | 2025-09-19 07:22:28 | INFO  | Task 1e23eba2-3210-495b-8e2f-05b25d4173c2 is in state STARTED
2025-09-19 07:22:28.437137 | orchestrator | 2025-09-19 07:22:28 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED
2025-09-19 07:22:28.437197 | orchestrator | 2025-09-19 07:22:28 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:23:47.720080 | orchestrator | 2025-09-19 07:23:47 | INFO  | Task
1e23eba2-3210-495b-8e2f-05b25d4173c2 is in state STARTED
2025-09-19 07:23:47.721434 | orchestrator | 2025-09-19 07:23:47 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED
2025-09-19 07:23:47.721481 | orchestrator | 2025-09-19 07:23:47 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:23:50.769636 | orchestrator | 2025-09-19 07:23:50 | INFO  | Task 36f41144-48e8-465b-8b13-41b23056b1d4 is in state STARTED
2025-09-19 07:23:50.772767 | orchestrator | 2025-09-19 07:23:50 | INFO  | Task 2a1770ed-d7f8-49be-9d91-ede84a2508ba is in state SUCCESS
2025-09-19 07:23:50.774757 | orchestrator |
2025-09-19 07:23:50.774801 | orchestrator |
2025-09-19 07:23:50.774815 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 07:23:50.774828 | orchestrator |
2025-09-19 07:23:50.774842 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 07:23:50.774854 | orchestrator | Friday 19 September 2025 07:22:04 +0000 (0:00:00.186) 0:00:00.186 ******
2025-09-19 07:23:50.774866 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:23:50.774878 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:23:50.774890 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:23:50.774901 | orchestrator |
2025-09-19 07:23:50.774912 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 07:23:50.774924 | orchestrator | Friday 19 September 2025 07:22:04 +0000 (0:00:00.299) 0:00:00.486 ******
2025-09-19 07:23:50.774935 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-09-19 07:23:50.774947 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-09-19 07:23:50.774958 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-09-19 07:23:50.774969 | orchestrator |
2025-09-19 07:23:50.774980 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-09-19 07:23:50.774991 | orchestrator |
2025-09-19 07:23:50.775003 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-09-19 07:23:50.775014 | orchestrator | Friday 19 September 2025 07:22:05 +0000 (0:00:00.628) 0:00:01.114 ******
2025-09-19 07:23:50.775025 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:23:50.775036 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:23:50.775047 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:23:50.775059 | orchestrator |
2025-09-19 07:23:50.775070 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:23:50.775152 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:23:50.775169 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:23:50.775180 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:23:50.775227 | orchestrator |
2025-09-19 07:23:50.775239 | orchestrator |
2025-09-19 07:23:50.775250 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:23:50.775262 | orchestrator | Friday 19 September 2025 07:22:06 +0000 (0:00:00.790) 0:00:01.905 ******
2025-09-19 07:23:50.775273 | orchestrator | ===============================================================================
2025-09-19 07:23:50.775284 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.79s
2025-09-19 07:23:50.775295 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s
2025-09-19 07:23:50.775306 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s
2025-09-19 07:23:50.775317 | orchestrator |
2025-09-19 07:23:50.775328 | orchestrator
|
2025-09-19 07:23:50.775339 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 07:23:50.775350 | orchestrator |
2025-09-19 07:23:50.775364 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 07:23:50.775377 | orchestrator | Friday 19 September 2025 07:21:52 +0000 (0:00:00.260) 0:00:00.260 ******
2025-09-19 07:23:50.775390 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:23:50.775403 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:23:50.775415 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:23:50.775428 | orchestrator |
2025-09-19 07:23:50.775441 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 07:23:50.775453 | orchestrator | Friday 19 September 2025 07:21:53 +0000 (0:00:00.329) 0:00:00.590 ******
2025-09-19 07:23:50.775466 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-09-19 07:23:50.775479 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-09-19 07:23:50.775492 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-09-19 07:23:50.775505 | orchestrator |
2025-09-19 07:23:50.775518 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-09-19 07:23:50.775530 | orchestrator |
2025-09-19 07:23:50.775543 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-09-19 07:23:50.775556 | orchestrator | Friday 19 September 2025 07:21:53 +0000 (0:00:00.466) 0:00:01.056 ******
2025-09-19 07:23:50.775569 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:23:50.775582 | orchestrator |
2025-09-19 07:23:50.775595 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-09-19 07:23:50.775607 | orchestrator | Friday 19 September 2025 07:21:54 +0000 (0:00:00.535) 0:00:01.592 ******
2025-09-19 07:23:50.775620 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-09-19 07:23:50.775633 | orchestrator |
2025-09-19 07:23:50.775646 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-09-19 07:23:50.775658 | orchestrator | Friday 19 September 2025 07:21:57 +0000 (0:00:03.206) 0:00:04.798 ******
2025-09-19 07:23:50.775671 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-09-19 07:23:50.775684 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-09-19 07:23:50.775697 | orchestrator |
2025-09-19 07:23:50.775710 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-09-19 07:23:50.775722 | orchestrator | Friday 19 September 2025 07:22:03 +0000 (0:00:06.473) 0:00:11.272 ******
2025-09-19 07:23:50.775733 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 07:23:50.775753 | orchestrator |
2025-09-19 07:23:50.775765 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-09-19 07:23:50.775776 | orchestrator | Friday 19 September 2025 07:22:07 +0000 (0:00:03.401) 0:00:14.673 ******
2025-09-19 07:23:50.775801 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 07:23:50.775813 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-09-19 07:23:50.775824 | orchestrator |
2025-09-19 07:23:50.775835 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-09-19 07:23:50.775847 | orchestrator | Friday 19 September 2025 07:22:11 +0000 (0:00:03.958) 0:00:18.632 ******
2025-09-19 07:23:50.775858 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 07:23:50.775869 | orchestrator |
2025-09-19 07:23:50.775880 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-09-19 07:23:50.775891 | orchestrator | Friday 19 September 2025 07:22:14 +0000 (0:00:03.408) 0:00:22.040 ******
2025-09-19 07:23:50.775902 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-09-19 07:23:50.775913 | orchestrator |
2025-09-19 07:23:50.775925 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-09-19 07:23:50.775936 | orchestrator | Friday 19 September 2025 07:22:18 +0000 (0:00:03.305) 0:00:25.987 ******
2025-09-19 07:23:50.775947 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:23:50.775958 | orchestrator |
2025-09-19 07:23:50.775969 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-09-19 07:23:50.775980 | orchestrator | Friday 19 September 2025 07:22:21 +0000 (0:00:03.805) 0:00:29.292 ******
2025-09-19 07:23:50.775992 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:23:50.776003 | orchestrator |
2025-09-19 07:23:50.776014 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-09-19 07:23:50.776025 | orchestrator | Friday 19 September 2025 07:22:25 +0000 (0:00:03.550) 0:00:33.098 ******
2025-09-19 07:23:50.776036 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:23:50.776048 | orchestrator |
2025-09-19 07:23:50.776064 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2025-09-19 07:23:50.776076 | orchestrator | Friday 19 September 2025 07:22:29 +0000 (0:00:03.550) 0:00:36.648 ******
2025-09-19 07:23:50.776091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 07:23:50.776135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 07:23:50.776155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 07:23:50.776176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 07:23:50.776190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 07:23:50.776202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 07:23:50.776214 | orchestrator |
2025-09-19 07:23:50.776225 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2025-09-19 07:23:50.776237 | orchestrator | Friday 19 September 2025 07:22:30 +0000 (0:00:01.365) 0:00:38.013 ******
2025-09-19 07:23:50.776248 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:23:50.776259 | orchestrator |
2025-09-19 07:23:50.776271 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2025-09-19 07:23:50.776282 | orchestrator | Friday 19 September 2025 07:22:30 +0000 (0:00:00.134) 0:00:38.149 ******
2025-09-19 07:23:50.776293 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:23:50.776304 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:23:50.776322 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:23:50.776333 | orchestrator |
2025-09-19 07:23:50.776344 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2025-09-19 07:23:50.776356 | orchestrator | Friday 19 September 2025 07:22:31 +0000 (0:00:00.474) 0:00:38.623 ******
2025-09-19 07:23:50.776367 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-19 07:23:50.776378 | orchestrator |
2025-09-19 07:23:50.776389 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2025-09-19 07:23:50.776401 | orchestrator | Friday 19 September 2025 07:22:32 +0000 (0:00:00.905) 0:00:39.528 ******
2025-09-19 07:23:50.776413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 07:23:50.776472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 07:23:50.776492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 07:23:50.776504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 07:23:50.776523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 07:23:50.776535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 07:23:50.776547 | orchestrator |
2025-09-19 07:23:50.776558 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2025-09-19 07:23:50.776575 | orchestrator | Friday 19 September 2025 07:22:34 +0000 (0:00:02.431) 0:00:41.959 ******
2025-09-19 07:23:50.776587 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:23:50.776598 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:23:50.776610 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:23:50.776621 | orchestrator | 2025-09-19 07:23:50.776632 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-19 07:23:50.776643 | orchestrator | Friday 19 September 2025 07:22:34 +0000 (0:00:00.319) 0:00:42.278 ****** 2025-09-19 07:23:50.776654 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:23:50.776666 | orchestrator | 2025-09-19 07:23:50.776677 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-09-19 07:23:50.776688 | orchestrator | Friday 19 September 2025 07:22:35 +0000 (0:00:00.746) 0:00:43.024 ****** 2025-09-19 07:23:50.776704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:23:50.776717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:23:50.776736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:23:50.776748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:50.776766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:50.776783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:50.776795 | orchestrator | 2025-09-19 07:23:50.776806 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-09-19 07:23:50.776818 | orchestrator | Friday 19 September 2025 07:22:38 +0000 (0:00:02.407) 0:00:45.432 ****** 2025-09-19 07:23:50.776837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 07:23:50.776853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:23:50.776873 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:23:50.776902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 07:23:50.776923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:23:50.776942 | orchestrator | skipping: [testbed-node-1] 2025-09-19 
07:23:50.776971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 07:23:50.777018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:23:50.777040 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:23:50.777053 | orchestrator | 2025-09-19 07:23:50.777065 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-09-19 07:23:50.777076 | 
orchestrator | Friday 19 September 2025 07:22:38 +0000 (0:00:00.637) 0:00:46.069 ****** 2025-09-19 07:23:50.777087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 07:23:50.777141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:23:50.777164 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:23:50.777189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 
'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 07:23:50.777216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:23:50.777228 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:23:50.777239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 07:23:50.777251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:23:50.777262 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:23:50.777273 | orchestrator | 2025-09-19 07:23:50.777285 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-09-19 07:23:50.777296 | orchestrator | Friday 19 September 2025 07:22:39 +0000 (0:00:01.251) 0:00:47.321 ****** 2025-09-19 07:23:50.777316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:23:50.777340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:23:50.777353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:23:50.777365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:50.777383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:50.777396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:50.777414 | orchestrator | 2025-09-19 07:23:50.777425 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-09-19 07:23:50.777437 | orchestrator | Friday 19 September 2025 07:22:42 +0000 (0:00:02.878) 0:00:50.199 ****** 2025-09-19 07:23:50.777453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:23:50.777465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:23:50.777477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:23:50.777496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:50.777516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:50.777578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:50.777599 | orchestrator | 2025-09-19 07:23:50.777616 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-09-19 07:23:50.777633 | orchestrator | Friday 19 September 2025 07:22:50 +0000 (0:00:08.043) 0:00:58.242 ****** 2025-09-19 07:23:50.777651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 07:23:50.777669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:23:50.777687 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:23:50.777719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 07:23:50.777758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:23:50.777779 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:23:50.777798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 07:23:50.777817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:23:50.777838 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:23:50.777855 | orchestrator | 2025-09-19 07:23:50.777874 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-09-19 07:23:50.777886 | orchestrator | Friday 19 September 2025 07:22:51 +0000 (0:00:00.806) 0:00:59.049 ****** 2025-09-19 07:23:50.777905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:23:50.777926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:23:50.777943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:23:50.777956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:50.777967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:50.777986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:50.778005 | orchestrator | 2025-09-19 07:23:50.778086 | orchestrator | TASK [magnum : include_tasks] 
************************************************** 2025-09-19 07:23:50.778145 | orchestrator | Friday 19 September 2025 07:22:53 +0000 (0:00:02.126) 0:01:01.175 ****** 2025-09-19 07:23:50.778158 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:23:50.778170 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:23:50.778181 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:23:50.778193 | orchestrator | 2025-09-19 07:23:50.778204 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-09-19 07:23:50.778230 | orchestrator | Friday 19 September 2025 07:22:54 +0000 (0:00:00.308) 0:01:01.484 ****** 2025-09-19 07:23:50.778251 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:23:50.778263 | orchestrator | 2025-09-19 07:23:50.778274 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-09-19 07:23:50.778285 | orchestrator | Friday 19 September 2025 07:22:56 +0000 (0:00:02.093) 0:01:03.578 ****** 2025-09-19 07:23:50.778296 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:23:50.778307 | orchestrator | 2025-09-19 07:23:50.778319 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-09-19 07:23:50.778330 | orchestrator | Friday 19 September 2025 07:22:58 +0000 (0:00:02.183) 0:01:05.762 ****** 2025-09-19 07:23:50.778341 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:23:50.778352 | orchestrator | 2025-09-19 07:23:50.778363 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-19 07:23:50.778374 | orchestrator | Friday 19 September 2025 07:23:16 +0000 (0:00:18.524) 0:01:24.287 ****** 2025-09-19 07:23:50.778385 | orchestrator | 2025-09-19 07:23:50.778402 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-19 07:23:50.778414 | orchestrator | Friday 19 September 2025 07:23:16 +0000 
(0:00:00.065) 0:01:24.353 ****** 2025-09-19 07:23:50.778425 | orchestrator | 2025-09-19 07:23:50.778436 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-19 07:23:50.778447 | orchestrator | Friday 19 September 2025 07:23:17 +0000 (0:00:00.062) 0:01:24.415 ****** 2025-09-19 07:23:50.778458 | orchestrator | 2025-09-19 07:23:50.778469 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-09-19 07:23:50.778480 | orchestrator | Friday 19 September 2025 07:23:17 +0000 (0:00:00.065) 0:01:24.481 ****** 2025-09-19 07:23:50.778491 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:23:50.778502 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:23:50.778513 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:23:50.778524 | orchestrator | 2025-09-19 07:23:50.778535 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-09-19 07:23:50.778546 | orchestrator | Friday 19 September 2025 07:23:36 +0000 (0:00:18.986) 0:01:43.467 ****** 2025-09-19 07:23:50.778557 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:23:50.778568 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:23:50.778579 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:23:50.778590 | orchestrator | 2025-09-19 07:23:50.778601 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:23:50.778612 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 07:23:50.778625 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 07:23:50.778636 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 07:23:50.778647 | orchestrator | 2025-09-19 07:23:50.778658 | orchestrator | 2025-09-19 07:23:50.778669 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:23:50.778694 | orchestrator | Friday 19 September 2025 07:23:48 +0000 (0:00:11.967) 0:01:55.435 ****** 2025-09-19 07:23:50.778705 | orchestrator | =============================================================================== 2025-09-19 07:23:50.778716 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 18.99s 2025-09-19 07:23:50.778727 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 18.52s 2025-09-19 07:23:50.778738 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.97s 2025-09-19 07:23:50.778749 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 8.04s 2025-09-19 07:23:50.778760 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.47s 2025-09-19 07:23:50.778771 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.96s 2025-09-19 07:23:50.778782 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.95s 2025-09-19 07:23:50.778793 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.81s 2025-09-19 07:23:50.778804 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.55s 2025-09-19 07:23:50.778815 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.41s 2025-09-19 07:23:50.778826 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.40s 2025-09-19 07:23:50.778837 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.31s 2025-09-19 07:23:50.778848 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.21s 2025-09-19 07:23:50.778859 | orchestrator | magnum 
: Copying over config.json files for services -------------------- 2.88s 2025-09-19 07:23:50.778870 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.43s 2025-09-19 07:23:50.778881 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.41s 2025-09-19 07:23:50.778900 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.18s 2025-09-19 07:23:50.778912 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.13s 2025-09-19 07:23:50.778923 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.09s 2025-09-19 07:23:50.778934 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.37s 2025-09-19 07:23:50.778945 | orchestrator | 2025-09-19 07:23:50 | INFO  | Task 1e23eba2-3210-495b-8e2f-05b25d4173c2 is in state STARTED 2025-09-19 07:23:50.778957 | orchestrator | 2025-09-19 07:23:50 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:23:50.778968 | orchestrator | 2025-09-19 07:23:50 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:23:53.829284 | orchestrator | 2025-09-19 07:23:53 | INFO  | Task 36f41144-48e8-465b-8b13-41b23056b1d4 is in state STARTED 2025-09-19 07:23:53.831185 | orchestrator | 2025-09-19 07:23:53 | INFO  | Task 1e23eba2-3210-495b-8e2f-05b25d4173c2 is in state STARTED 2025-09-19 07:23:53.832679 | orchestrator | 2025-09-19 07:23:53 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:23:53.832707 | orchestrator | 2025-09-19 07:23:53 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:23:56.874774 | orchestrator | 2025-09-19 07:23:56 | INFO  | Task 36f41144-48e8-465b-8b13-41b23056b1d4 is in state STARTED 2025-09-19 07:23:56.875170 | orchestrator | 2025-09-19 07:23:56 | INFO  | Task 1e23eba2-3210-495b-8e2f-05b25d4173c2 is in state STARTED 
2025-09-19 07:23:56.876243 | orchestrator | 2025-09-19 07:23:56 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:23:56.876270 | orchestrator | 2025-09-19 07:23:56 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:23:59.913428 | orchestrator | 2025-09-19 07:23:59 | INFO  | Task 36f41144-48e8-465b-8b13-41b23056b1d4 is in state STARTED 2025-09-19 07:23:59.917064 | orchestrator | 2025-09-19 07:23:59 | INFO  | Task 1e23eba2-3210-495b-8e2f-05b25d4173c2 is in state STARTED 2025-09-19 07:23:59.919326 | orchestrator | 2025-09-19 07:23:59 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:23:59.919659 | orchestrator | 2025-09-19 07:23:59 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:24:02.966224 | orchestrator | 2025-09-19 07:24:02 | INFO  | Task 36f41144-48e8-465b-8b13-41b23056b1d4 is in state STARTED 2025-09-19 07:24:02.966984 | orchestrator | 2025-09-19 07:24:02 | INFO  | Task 1e23eba2-3210-495b-8e2f-05b25d4173c2 is in state STARTED 2025-09-19 07:24:02.968654 | orchestrator | 2025-09-19 07:24:02 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:24:02.968682 | orchestrator | 2025-09-19 07:24:02 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:24:06.012980 | orchestrator | 2025-09-19 07:24:06 | INFO  | Task 36f41144-48e8-465b-8b13-41b23056b1d4 is in state STARTED 2025-09-19 07:24:06.015196 | orchestrator | 2025-09-19 07:24:06 | INFO  | Task 1e23eba2-3210-495b-8e2f-05b25d4173c2 is in state STARTED 2025-09-19 07:24:06.017662 | orchestrator | 2025-09-19 07:24:06 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:24:06.017687 | orchestrator | 2025-09-19 07:24:06 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:24:09.060411 | orchestrator | 2025-09-19 07:24:09 | INFO  | Task 36f41144-48e8-465b-8b13-41b23056b1d4 is in state STARTED 2025-09-19 07:24:09.063010 | orchestrator | 2025-09-19 
07:24:09 | INFO  | Task 1e23eba2-3210-495b-8e2f-05b25d4173c2 is in state STARTED 2025-09-19 07:24:09.065903 | orchestrator | 2025-09-19 07:24:09 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:24:09.065969 | orchestrator | 2025-09-19 07:24:09 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:24:12.114441 | orchestrator | 2025-09-19 07:24:12 | INFO  | Task 36f41144-48e8-465b-8b13-41b23056b1d4 is in state STARTED 2025-09-19 07:24:12.114525 | orchestrator | 2025-09-19 07:24:12 | INFO  | Task 1e23eba2-3210-495b-8e2f-05b25d4173c2 is in state STARTED 2025-09-19 07:24:12.115348 | orchestrator | 2025-09-19 07:24:12 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:24:12.115894 | orchestrator | 2025-09-19 07:24:12 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:24:15.164727 | orchestrator | 2025-09-19 07:24:15 | INFO  | Task 36f41144-48e8-465b-8b13-41b23056b1d4 is in state SUCCESS 2025-09-19 07:24:15.166700 | orchestrator | 2025-09-19 07:24:15.166833 | orchestrator | 2025-09-19 07:24:15.166918 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:24:15.166937 | orchestrator | 2025-09-19 07:24:15.166949 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 07:24:15.166961 | orchestrator | Friday 19 September 2025 07:22:02 +0000 (0:00:00.197) 0:00:00.197 ****** 2025-09-19 07:24:15.166973 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:24:15.166985 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:24:15.167011 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:24:15.167023 | orchestrator | 2025-09-19 07:24:15.167035 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:24:15.167046 | orchestrator | Friday 19 September 2025 07:22:03 +0000 (0:00:00.295) 0:00:00.492 ****** 2025-09-19 07:24:15.167057 | 
orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-09-19 07:24:15.167069 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-09-19 07:24:15.167080 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-09-19 07:24:15.167091 | orchestrator | 2025-09-19 07:24:15.167128 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-09-19 07:24:15.167174 | orchestrator | 2025-09-19 07:24:15.167185 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-19 07:24:15.167197 | orchestrator | Friday 19 September 2025 07:22:03 +0000 (0:00:00.600) 0:00:01.093 ****** 2025-09-19 07:24:15.167208 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:24:15.167222 | orchestrator | 2025-09-19 07:24:15.167235 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-09-19 07:24:15.167247 | orchestrator | Friday 19 September 2025 07:22:04 +0000 (0:00:00.581) 0:00:01.675 ****** 2025-09-19 07:24:15.167277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:24:15.167295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 
'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:24:15.167309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:24:15.167322 | orchestrator | 2025-09-19 07:24:15.167336 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-09-19 07:24:15.167348 | orchestrator | Friday 19 September 2025 07:22:05 +0000 (0:00:00.842) 0:00:02.517 ****** 2025-09-19 07:24:15.167361 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-09-19 07:24:15.167375 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-09-19 07:24:15.167388 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 07:24:15.167400 | orchestrator | 2025-09-19 07:24:15.167411 | orchestrator | TASK [grafana : 
include_tasks] ************************************************* 2025-09-19 07:24:15.167422 | orchestrator | Friday 19 September 2025 07:22:05 +0000 (0:00:00.768) 0:00:03.285 ****** 2025-09-19 07:24:15.167433 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:24:15.167444 | orchestrator | 2025-09-19 07:24:15.167455 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-09-19 07:24:15.167466 | orchestrator | Friday 19 September 2025 07:22:06 +0000 (0:00:00.609) 0:00:03.895 ****** 2025-09-19 07:24:15.167494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:24:15.167520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:24:15.167533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:24:15.167544 | orchestrator | 2025-09-19 07:24:15.167555 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-09-19 07:24:15.167566 | orchestrator | Friday 19 September 2025 07:22:07 +0000 (0:00:01.277) 0:00:05.172 ****** 2025-09-19 07:24:15.167578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 07:24:15.167590 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:15.167601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 
'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 07:24:15.167613 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:15.167632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 07:24:15.167651 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:15.167662 | orchestrator | 2025-09-19 07:24:15.167673 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-09-19 07:24:15.167684 | orchestrator | Friday 19 September 2025 07:22:08 +0000 (0:00:00.327) 0:00:05.500 ****** 2025-09-19 07:24:15.167695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 07:24:15.167711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 07:24:15.167723 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:15.167734 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:15.167746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 07:24:15.167757 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:15.167768 | orchestrator |
2025-09-19 07:24:15.167779 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-09-19 07:24:15.167790 | orchestrator | Friday 19 September 2025 07:22:08 +0000 (0:00:00.707) 0:00:06.207 ******
2025-09-19 07:24:15.167802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 07:24:15.167826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 07:24:15.167838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 07:24:15.167850 | orchestrator |
2025-09-19 07:24:15.167861 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-09-19 07:24:15.167872 | orchestrator | Friday 19 September 2025 07:22:10 +0000 (0:00:01.207) 0:00:07.415 ******
2025-09-19 07:24:15.167888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 07:24:15.167900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 07:24:15.167912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 07:24:15.167924 | orchestrator |
2025-09-19 07:24:15.167935 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-09-19 07:24:15.167946 | orchestrator | Friday 19 September 2025 07:22:11 +0000 (0:00:01.277) 0:00:08.692 ******
2025-09-19 07:24:15.167963 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:24:15.167974 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:15.167985 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:15.167996 | orchestrator |
2025-09-19 07:24:15.168007 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-09-19 07:24:15.168018 | orchestrator | Friday 19 September 2025 07:22:11 +0000 (0:00:00.469) 0:00:09.162 ******
2025-09-19 07:24:15.168030 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-09-19 07:24:15.168041 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-09-19 07:24:15.168052 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-09-19 07:24:15.168063 | orchestrator |
2025-09-19 07:24:15.168074 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-09-19 07:24:15.168085 | orchestrator | Friday 19 September 2025 07:22:13 +0000 (0:00:01.269) 0:00:10.431 ******
2025-09-19 07:24:15.168096 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-09-19 07:24:15.168114 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-09-19 07:24:15.168125 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-09-19 07:24:15.168154 | orchestrator |
2025-09-19 07:24:15.168166 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-09-19 07:24:15.168177 | orchestrator | Friday 19 September 2025 07:22:14 +0000 (0:00:01.299) 0:00:11.731 ******
2025-09-19 07:24:15.168188 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-19 07:24:15.168199 | orchestrator |
2025-09-19 07:24:15.168210 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-09-19 07:24:15.168221 | orchestrator | Friday 19 September 2025 07:22:15 +0000 (0:00:00.667) 0:00:12.399 ******
2025-09-19 07:24:15.168232 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-09-19 07:24:15.168243 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-09-19 07:24:15.168254 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:24:15.168265 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:24:15.168277 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:24:15.168288 | orchestrator |
2025-09-19 07:24:15.168299 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-09-19 07:24:15.168310 | orchestrator | Friday 19 September 2025 07:22:15 +0000 (0:00:00.659) 0:00:13.058 ******
2025-09-19 07:24:15.168321 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:24:15.168332 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:15.168343 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:15.168354 | orchestrator |
2025-09-19 07:24:15.168365 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-09-19 07:24:15.168376 | orchestrator | Friday 19 September 2025 07:22:16 +0000 (0:00:00.475) 0:00:13.534 ******
2025-09-19 07:24:15.168392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094797, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9269843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.168405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094797, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9269843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.168422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094797, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9269843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.168434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1094884, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9413033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.168454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1094884, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9413033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.168467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1094884, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9413033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.168496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094819, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9298882, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.168509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094819, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9298882, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.168527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094819, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9298882, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.168539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1094885, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9434288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.168557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1094885, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9434288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.168569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1094885, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9434288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.168591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1094853, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9329915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.168603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1094853, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9329915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.168621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1094853, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9329915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.168633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094875, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9396584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.168644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094875, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9396584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.168663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094875, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9396584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.168675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094796, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9237926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.168702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094796, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9237926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.168721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094796, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9237926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.168733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094809, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9274416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.168744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094809, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9274416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.169295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094809, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9274416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.169320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094829, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9303563, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.169339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094829, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9303563, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.169360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094829, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9303563, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.169372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094862, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.934523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.169384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094862, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.934523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.169396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094862, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.934523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.169418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1094881, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.940789, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.169431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1094881, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.940789, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.169453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1094881, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.940789, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.169466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1094813, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9287255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.169478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1094813, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9287255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.169490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1094813, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9287255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.169509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094872, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9392464, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.169522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094872, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9392464, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.169544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094872, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9392464, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.169557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094855, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.933344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.169569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094855, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.933344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.169580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094855, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.933344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.169598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094847, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9322867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.169611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094847, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9322867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.169633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094847, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9322867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.169646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094843, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9317858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False,
'isgid': False}}) 2025-09-19 07:24:15.169658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094843, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9317858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.169670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094843, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9317858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.169682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1094868, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9385831, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.169701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1094868, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9385831, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.169717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1094868, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9385831, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.169732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094835, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.931411, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.169743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094835, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.931411, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.169753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094835, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.931411, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.169764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1094879, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.940789, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.169781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1094879, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.940789, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.169792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1094879, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.940789, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.169814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1095081, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.997793, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.169826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1095081, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.997793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.169836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1095081, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.997793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.169847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1094935, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9565253, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.169863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1094935, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9565253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.169874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1094935, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9565253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.169893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094918, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 
'mtime': 1752315970.0, 'ctime': 1758263618.946606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.169904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094918, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.946606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.169915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094918, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.946606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.169926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 
'inode': 1094964, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9604719, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.169938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1094964, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9604719, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.169957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1094964, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9604719, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.169975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094901, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9444046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.169991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094901, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9444046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.170004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094901, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9444046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.170057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1095009, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.976355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.170072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1095009, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.976355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.170091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1095009, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.976355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.170112 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094966, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.972626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.170128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094966, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.972626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.170157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094966, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.972626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.170169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1095015, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.977468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.170181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1095015, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.977468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.170206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1095015, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.977468, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.170219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1095077, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.995793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.170235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1095077, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.995793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.170247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1095077, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 
'ctime': 1758263618.995793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.170258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1095007, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.974793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.170268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1095007, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.974793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.170290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1095007, 'dev': 137, 'nlink': 1, 'atime': 
1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.974793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094960, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9580662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094960, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9580662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094960, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9580662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1094930, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9494314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1094930, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9494314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1094930, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9494314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094954, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.957735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094954, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.957735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094954, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.957735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094922, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9487894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094922, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9487894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094922, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9487894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1094963, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.958793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1094963, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.958793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1094963, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.958793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1095028, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.993793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1095028, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.993793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1095028, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.993793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1095022, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.981793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1095022, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.981793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1095022, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.981793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094909, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9449728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094909, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9449728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094909, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9449728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094916, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9460394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094916, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9460394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094916, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9460394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1095002, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9742105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1095002, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9742105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1095002, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9742105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1095018, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9785702, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:24:15.170702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1095018, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9785702, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.170717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1095018, 'dev': 137, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758263618.9785702, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:24:15.170727 | orchestrator | 2025-09-19 07:24:15.170738 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-09-19 07:24:15.170748 | orchestrator | Friday 19 September 2025 07:22:54 +0000 (0:00:38.235) 0:00:51.769 ****** 2025-09-19 07:24:15.170758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:24:15.170769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:24:15.170784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:24:15.170794 | orchestrator | 2025-09-19 07:24:15.170804 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-09-19 07:24:15.170814 | orchestrator | Friday 19 September 2025 07:22:55 +0000 (0:00:00.938) 0:00:52.707 ****** 2025-09-19 07:24:15.170824 | 
orchestrator | changed: [testbed-node-0] 2025-09-19 07:24:15.170834 | orchestrator | 2025-09-19 07:24:15.170843 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-09-19 07:24:15.170858 | orchestrator | Friday 19 September 2025 07:22:57 +0000 (0:00:02.298) 0:00:55.006 ****** 2025-09-19 07:24:15.170868 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:24:15.170878 | orchestrator | 2025-09-19 07:24:15.170888 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-19 07:24:15.170898 | orchestrator | Friday 19 September 2025 07:22:59 +0000 (0:00:02.214) 0:00:57.220 ****** 2025-09-19 07:24:15.170907 | orchestrator | 2025-09-19 07:24:15.170917 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-19 07:24:15.170927 | orchestrator | Friday 19 September 2025 07:23:00 +0000 (0:00:00.175) 0:00:57.396 ****** 2025-09-19 07:24:15.170937 | orchestrator | 2025-09-19 07:24:15.170947 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-19 07:24:15.170957 | orchestrator | Friday 19 September 2025 07:23:00 +0000 (0:00:00.060) 0:00:57.456 ****** 2025-09-19 07:24:15.170967 | orchestrator | 2025-09-19 07:24:15.170977 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-09-19 07:24:15.170986 | orchestrator | Friday 19 September 2025 07:23:00 +0000 (0:00:00.063) 0:00:57.519 ****** 2025-09-19 07:24:15.170996 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:15.171006 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:15.171016 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:24:15.171026 | orchestrator | 2025-09-19 07:24:15.171035 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-09-19 07:24:15.171045 | orchestrator | Friday 19 September 2025 07:23:01 
+0000 (0:00:01.688) 0:00:59.207 ****** 2025-09-19 07:24:15.171055 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:15.171065 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:15.171075 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-09-19 07:24:15.171085 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-09-19 07:24:15.171099 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2025-09-19 07:24:15.171109 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:24:15.171119 | orchestrator | 2025-09-19 07:24:15.171129 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-09-19 07:24:15.171187 | orchestrator | Friday 19 September 2025 07:23:40 +0000 (0:00:38.662) 0:01:37.870 ****** 2025-09-19 07:24:15.171198 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:15.171208 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:24:15.171218 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:24:15.171227 | orchestrator | 2025-09-19 07:24:15.171237 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-09-19 07:24:15.171247 | orchestrator | Friday 19 September 2025 07:24:09 +0000 (0:00:28.674) 0:02:06.544 ****** 2025-09-19 07:24:15.171257 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:24:15.171267 | orchestrator | 2025-09-19 07:24:15.171277 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-09-19 07:24:15.171287 | orchestrator | Friday 19 September 2025 07:24:11 +0000 (0:00:02.244) 0:02:08.788 ****** 2025-09-19 07:24:15.171297 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:15.171307 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:15.171317 | orchestrator | 
skipping: [testbed-node-2] 2025-09-19 07:24:15.171327 | orchestrator | 2025-09-19 07:24:15.171337 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-09-19 07:24:15.171347 | orchestrator | Friday 19 September 2025 07:24:12 +0000 (0:00:00.619) 0:02:09.408 ****** 2025-09-19 07:24:15.171357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-09-19 07:24:15.171369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-09-19 07:24:15.171380 | orchestrator | 2025-09-19 07:24:15.171389 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-09-19 07:24:15.171399 | orchestrator | Friday 19 September 2025 07:24:14 +0000 (0:00:02.378) 0:02:11.787 ****** 2025-09-19 07:24:15.171409 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:15.171419 | orchestrator | 2025-09-19 07:24:15.171429 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:24:15.171437 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-19 07:24:15.171446 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-19 07:24:15.171454 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-19 07:24:15.171462 | 
orchestrator | 2025-09-19 07:24:15.171470 | orchestrator | 2025-09-19 07:24:15.171478 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:24:15.171487 | orchestrator | Friday 19 September 2025 07:24:14 +0000 (0:00:00.232) 0:02:12.019 ****** 2025-09-19 07:24:15.171495 | orchestrator | =============================================================================== 2025-09-19 07:24:15.171503 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.66s 2025-09-19 07:24:15.171516 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.24s 2025-09-19 07:24:15.171525 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 28.67s 2025-09-19 07:24:15.171532 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.38s 2025-09-19 07:24:15.171541 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.30s 2025-09-19 07:24:15.171549 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.24s 2025-09-19 07:24:15.171562 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.21s 2025-09-19 07:24:15.171570 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.69s 2025-09-19 07:24:15.171578 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.30s 2025-09-19 07:24:15.171586 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.28s 2025-09-19 07:24:15.171594 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.28s 2025-09-19 07:24:15.171602 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.27s 2025-09-19 07:24:15.171610 | orchestrator | grafana : Copying over config.json files 
-------------------------------- 1.21s 2025-09-19 07:24:15.171618 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.94s 2025-09-19 07:24:15.171626 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.84s 2025-09-19 07:24:15.171634 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.77s 2025-09-19 07:24:15.171642 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.71s 2025-09-19 07:24:15.171654 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.67s 2025-09-19 07:24:15.171662 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.66s 2025-09-19 07:24:15.171670 | orchestrator | grafana : Remove old grafana docker volume ------------------------------ 0.62s 2025-09-19 07:24:15.171678 | orchestrator | 2025-09-19 07:24:15 | INFO  | Task 1e23eba2-3210-495b-8e2f-05b25d4173c2 is in state STARTED 2025-09-19 07:24:15.171686 | orchestrator | 2025-09-19 07:24:15 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:24:15.171695 | orchestrator | 2025-09-19 07:24:15 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:24:18.211280 | orchestrator | 2025-09-19 07:24:18 | INFO  | Task 1e23eba2-3210-495b-8e2f-05b25d4173c2 is in state STARTED 2025-09-19 07:24:18.213310 | orchestrator | 2025-09-19 07:24:18 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state STARTED 2025-09-19 07:24:18.213586 | orchestrator | 2025-09-19 07:24:18 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:24:21.253759 | orchestrator | 2025-09-19 07:24:21 | INFO  | Task 1e23eba2-3210-495b-8e2f-05b25d4173c2 is in state STARTED 2025-09-19 07:24:21.260219 | orchestrator | 2025-09-19 07:24:21 | INFO  | Task 17834c6f-6f25-4aec-a434-2af979b0101a is in state SUCCESS 2025-09-19 07:24:21.262896 | orchestrator | 
2025-09-19 07:24:21.262947 | orchestrator |
2025-09-19 07:24:21.262961 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 07:24:21.262974 | orchestrator |
2025-09-19 07:24:21.262995 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-09-19 07:24:21.263007 | orchestrator | Friday 19 September 2025 07:15:20 +0000 (0:00:00.418) 0:00:00.418 ******
2025-09-19 07:24:21.263019 | orchestrator | changed: [testbed-manager]
2025-09-19 07:24:21.263032 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:24:21.263043 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:24:21.263055 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:24:21.263066 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:24:21.263077 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:24:21.263089 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:24:21.263100 | orchestrator |
2025-09-19 07:24:21.263112 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 07:24:21.263123 | orchestrator | Friday 19 September 2025 07:15:21 +0000 (0:00:00.942) 0:00:01.360 ******
2025-09-19 07:24:21.263134 | orchestrator | changed: [testbed-manager]
2025-09-19 07:24:21.263169 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:24:21.263180 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:24:21.263192 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:24:21.263228 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:24:21.263240 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:24:21.263251 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:24:21.263262 | orchestrator |
2025-09-19 07:24:21.263274 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 07:24:21.263285 | orchestrator | Friday 19 September 2025 07:15:22 +0000 (0:00:00.893) 0:00:02.253 ******
2025-09-19 07:24:21.263297 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-09-19 07:24:21.263309 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-09-19 07:24:21.263320 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-09-19 07:24:21.263331 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-09-19 07:24:21.263342 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-09-19 07:24:21.263353 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-09-19 07:24:21.263364 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-09-19 07:24:21.263377 | orchestrator |
2025-09-19 07:24:21.263388 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-09-19 07:24:21.263399 | orchestrator |
2025-09-19 07:24:21.263411 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-09-19 07:24:21.263422 | orchestrator | Friday 19 September 2025 07:15:23 +0000 (0:00:01.130) 0:00:03.384 ******
2025-09-19 07:24:21.263433 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:24:21.263445 | orchestrator |
2025-09-19 07:24:21.263456 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-09-19 07:24:21.263468 | orchestrator | Friday 19 September 2025 07:15:24 +0000 (0:00:00.710) 0:00:04.095 ******
2025-09-19 07:24:21.263479 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-09-19 07:24:21.263493 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-09-19 07:24:21.263506 | orchestrator |
2025-09-19 07:24:21.263519 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-09-19 07:24:21.263531 | orchestrator | Friday 19 September 2025 07:15:27 +0000 (0:00:03.411) 0:00:07.506 ******
2025-09-19 07:24:21.263544 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-19 07:24:21.263557 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-19 07:24:21.263570 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:24:21.263583 | orchestrator |
2025-09-19 07:24:21.263596 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-09-19 07:24:21.263608 | orchestrator | Friday 19 September 2025 07:15:31 +0000 (0:00:00.958) 0:00:11.296 ******
2025-09-19 07:24:21.263628 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:24:21.263648 | orchestrator |
2025-09-19 07:24:21.263668 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-09-19 07:24:21.263688 | orchestrator | Friday 19 September 2025 07:15:32 +0000 (0:00:00.958) 0:00:12.254 ******
2025-09-19 07:24:21.263710 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:24:21.263733 | orchestrator |
2025-09-19 07:24:21.263754 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-09-19 07:24:21.263793 | orchestrator | Friday 19 September 2025 07:15:33 +0000 (0:00:01.507) 0:00:13.762 ******
2025-09-19 07:24:21.263815 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:24:21.263836 | orchestrator |
2025-09-19 07:24:21.263852 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-19 07:24:21.263870 | orchestrator | Friday 19 September 2025 07:15:37 +0000 (0:00:03.725) 0:00:17.487 ******
2025-09-19 07:24:21.263888 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:24:21.263907 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.263927 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.263940 | orchestrator |
2025-09-19 07:24:21.263959 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-09-19 07:24:21.263978 | orchestrator | Friday 19 September 2025 07:15:37 +0000 (0:00:00.433) 0:00:17.921 ******
2025-09-19 07:24:21.264011 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:24:21.264030 | orchestrator |
2025-09-19 07:24:21.264050 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-09-19 07:24:21.264069 | orchestrator | Friday 19 September 2025 07:16:05 +0000 (0:00:27.497) 0:00:45.419 ******
2025-09-19 07:24:21.264090 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:24:21.264109 | orchestrator |
2025-09-19 07:24:21.264127 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-19 07:24:21.264170 | orchestrator | Friday 19 September 2025 07:16:18 +0000 (0:00:13.339) 0:00:58.759 ******
2025-09-19 07:24:21.264191 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:24:21.264211 | orchestrator |
2025-09-19 07:24:21.264231 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-19 07:24:21.264251 | orchestrator | Friday 19 September 2025 07:16:31 +0000 (0:00:12.360) 0:01:11.119 ******
2025-09-19 07:24:21.264293 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:24:21.264313 | orchestrator |
2025-09-19 07:24:21.264327 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-09-19 07:24:21.264339 | orchestrator | Friday 19 September 2025 07:16:32 +0000 (0:00:01.118) 0:01:12.238 ******
2025-09-19 07:24:21.264350 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:24:21.264361 | orchestrator |
2025-09-19 07:24:21.264372 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-19 07:24:21.264384 | orchestrator | Friday 19 September 2025 07:16:32 +0000 (0:00:00.557) 0:01:12.795 ******
2025-09-19 07:24:21.264395 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:24:21.264406 | orchestrator |
2025-09-19 07:24:21.264423 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-09-19 07:24:21.264442 | orchestrator | Friday 19 September 2025 07:16:33 +0000 (0:00:00.564) 0:01:13.360 ******
2025-09-19 07:24:21.264462 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:24:21.264481 | orchestrator |
2025-09-19 07:24:21.264501 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-09-19 07:24:21.264519 | orchestrator | Friday 19 September 2025 07:16:51 +0000 (0:00:18.032) 0:01:31.395 ******
2025-09-19 07:24:21.264539 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:24:21.264558 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.264577 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.264589 | orchestrator |
2025-09-19 07:24:21.264601 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-09-19 07:24:21.264612 | orchestrator |
2025-09-19 07:24:21.264623 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-09-19 07:24:21.264634 | orchestrator | Friday 19 September 2025 07:16:52 +0000 (0:00:00.687) 0:01:32.082 ******
2025-09-19 07:24:21.264645 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:24:21.264656 | orchestrator |
2025-09-19 07:24:21.264667 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-09-19 07:24:21.264678 | orchestrator | Friday 19 September 2025 07:16:53 +0000 (0:00:00.949) 0:01:33.032 ******
2025-09-19 07:24:21.264689 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.264700 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.264711 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:24:21.264722 | orchestrator |
2025-09-19 07:24:21.264734 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-09-19 07:24:21.264745 | orchestrator | Friday 19 September 2025 07:16:55 +0000 (0:00:02.167) 0:01:35.199 ******
2025-09-19 07:24:21.264756 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.264767 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.264778 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:24:21.264789 | orchestrator |
2025-09-19 07:24:21.264800 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-09-19 07:24:21.264822 | orchestrator | Friday 19 September 2025 07:16:57 +0000 (0:00:02.230) 0:01:37.430 ******
2025-09-19 07:24:21.264833 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:24:21.264844 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.264855 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.264866 | orchestrator |
2025-09-19 07:24:21.264893 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-09-19 07:24:21.264904 | orchestrator | Friday 19 September 2025 07:16:57 +0000 (0:00:00.291) 0:01:37.722 ******
2025-09-19 07:24:21.264925 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-19 07:24:21.264936 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.264948 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-19 07:24:21.264959 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.264970 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-09-19 07:24:21.264981 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-09-19 07:24:21.264992 | orchestrator |
2025-09-19 07:24:21.265003 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-09-19 07:24:21.265014 | orchestrator | Friday 19 September 2025 07:17:07 +0000 (0:00:09.369) 0:01:47.091 ******
2025-09-19 07:24:21.265025 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:24:21.265036 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.265047 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.265058 | orchestrator |
2025-09-19 07:24:21.265069 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-09-19 07:24:21.265088 | orchestrator | Friday 19 September 2025 07:17:07 +0000 (0:00:00.732) 0:01:47.823 ******
2025-09-19 07:24:21.265099 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-09-19 07:24:21.265110 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:24:21.265122 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-19 07:24:21.265133 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.265182 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-19 07:24:21.265194 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.265205 | orchestrator |
2025-09-19 07:24:21.265217 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-09-19 07:24:21.265228 | orchestrator | Friday 19 September 2025 07:17:08 +0000 (0:00:01.056) 0:01:48.880 ******
2025-09-19 07:24:21.265239 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.265250 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:24:21.265261 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.265273 | orchestrator |
2025-09-19 07:24:21.265284 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-09-19 07:24:21.265295 | orchestrator | Friday 19 September 2025 07:17:09 +0000 (0:00:00.800) 0:01:49.681 ******
2025-09-19 07:24:21.265306 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.265317 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.265328 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:24:21.265339 | orchestrator |
2025-09-19 07:24:21.265350 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-09-19 07:24:21.265361 | orchestrator | Friday 19 September 2025 07:17:10 +0000 (0:00:01.194) 0:01:50.875 ******
2025-09-19 07:24:21.265373 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.265384 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.265403 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:24:21.265415 | orchestrator |
2025-09-19 07:24:21.265426 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-09-19 07:24:21.265437 | orchestrator | Friday 19 September 2025 07:17:14 +0000 (0:00:03.148) 0:01:54.024 ******
2025-09-19 07:24:21.265448 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.265460 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.265471 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:24:21.265482 | orchestrator |
2025-09-19 07:24:21.265493 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-19 07:24:21.265511 | orchestrator | Friday 19 September 2025 07:17:33 +0000 (0:00:19.661) 0:02:13.685 ******
2025-09-19 07:24:21.265522 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.265533 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.265545 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:24:21.265556 | orchestrator |
2025-09-19 07:24:21.265567 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-19 07:24:21.265578 | orchestrator | Friday 19 September 2025 07:17:46 +0000 (0:00:12.851) 0:02:26.537 ******
2025-09-19 07:24:21.265589 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:24:21.265600 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.265611 | orchestrator | skipping: [testbed-node-2]
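The "Get a list of existing cells" and "Extract current cell settings from list" tasks above read the ASCII table printed by `nova-manage cell_v2 list_cells --verbose` and pull out each cell's UUID, transport URL, and database connection. As a rough illustration only (kolla-ansible's role extracts these values with its own regular expression, which may differ), a parser for that table could look like:

```python
def parse_cells(list_cells_output):
    """Parse the ASCII table printed by `nova-manage cell_v2 list_cells --verbose`.

    Returns a mapping of cell name -> {'uuid', 'transport_url', 'db_connection'}.
    Illustrative sketch only, not kolla-ansible's actual implementation.
    """
    cells = {}
    for line in list_cells_output.splitlines():
        line = line.strip()
        if not line.startswith("|"):
            continue  # skip +---+ border rows and any non-table output
        cols = [c.strip() for c in line.strip("|").split("|")]
        # A data row has at least four columns and a UUID (with dashes)
        # in the second one; this also filters out the header row.
        if len(cols) >= 4 and cols[0].lower() != "name" and "-" in cols[1]:
            cells[cols[0]] = {
                "uuid": cols[1],
                "transport_url": cols[2],
                "db_connection": cols[3],
            }
    return cells
```

The extracted settings are then compared against the desired transport and database URLs, which is why the subsequent "Update cell" task can be skipped when nothing differs from the deployed configuration.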
2025-09-19 07:24:21.265622 | orchestrator |
2025-09-19 07:24:21.265633 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-09-19 07:24:21.265644 | orchestrator | Friday 19 September 2025 07:17:47 +0000 (0:00:00.748) 0:02:27.285 ******
2025-09-19 07:24:21.265655 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.265666 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.265677 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:24:21.265688 | orchestrator |
2025-09-19 07:24:21.265699 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-09-19 07:24:21.265710 | orchestrator | Friday 19 September 2025 07:17:58 +0000 (0:00:11.195) 0:02:38.481 ******
2025-09-19 07:24:21.265722 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:24:21.265732 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.265744 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.265755 | orchestrator |
2025-09-19 07:24:21.265766 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-09-19 07:24:21.265777 | orchestrator | Friday 19 September 2025 07:17:59 +0000 (0:00:01.202) 0:02:39.684 ******
2025-09-19 07:24:21.265788 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:24:21.265799 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.265810 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.265821 | orchestrator |
2025-09-19 07:24:21.265832 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-09-19 07:24:21.265843 | orchestrator |
2025-09-19 07:24:21.265854 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-19 07:24:21.265865 | orchestrator | Friday 19 September 2025 07:17:59 +0000 (0:00:00.316) 0:02:40.000 ******
2025-09-19 07:24:21.265876 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:24:21.265889 | orchestrator |
2025-09-19 07:24:21.265900 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-09-19 07:24:21.265911 | orchestrator | Friday 19 September 2025 07:18:00 +0000 (0:00:00.484) 0:02:40.484 ******
2025-09-19 07:24:21.265922 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-09-19 07:24:21.265933 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-09-19 07:24:21.265944 | orchestrator |
2025-09-19 07:24:21.265955 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-09-19 07:24:21.265966 | orchestrator | Friday 19 September 2025 07:18:03 +0000 (0:00:03.269) 0:02:43.753 ******
2025-09-19 07:24:21.265978 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2025-09-19 07:24:21.265990 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2025-09-19 07:24:21.266002 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-09-19 07:24:21.266013 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-09-19 07:24:21.266127 | orchestrator |
2025-09-19 07:24:21.266202 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-09-19 07:24:21.266224 | orchestrator | Friday 19 September 2025 07:18:09 +0000 (0:00:06.092) 0:02:49.846 ******
2025-09-19 07:24:21.266236 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 07:24:21.266247 | orchestrator |
2025-09-19 07:24:21.266258 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-09-19 07:24:21.266269 | orchestrator | Friday 19 September 2025 07:18:13 +0000 (0:00:03.206) 0:02:53.053 ******
2025-09-19 07:24:21.266280 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 07:24:21.266291 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2025-09-19 07:24:21.266303 | orchestrator |
2025-09-19 07:24:21.266314 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-09-19 07:24:21.266325 | orchestrator | Friday 19 September 2025 07:18:16 +0000 (0:00:03.855) 0:02:56.909 ******
2025-09-19 07:24:21.266336 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 07:24:21.266347 | orchestrator |
2025-09-19 07:24:21.266357 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-09-19 07:24:21.266367 | orchestrator | Friday 19 September 2025 07:18:20 +0000 (0:00:03.459) 0:03:00.369 ******
2025-09-19 07:24:21.266377 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-09-19 07:24:21.266387 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-09-19 07:24:21.266396 | orchestrator |
2025-09-19 07:24:21.266406 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-09-19 07:24:21.266424 | orchestrator | Friday 19 September 2025 07:18:28 +0000 (0:00:07.708) 0:03:08.077 ******
2025-09-19 07:24:21.266441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 07:24:21.266456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 07:24:21.266482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 07:24:21.266501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:24:21.266514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:24:21.266525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:24:21.266536 | orchestrator |
2025-09-19 07:24:21.266546 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2025-09-19 07:24:21.266556 | orchestrator | Friday 19 September 2025 07:18:30 +0000 (0:00:02.002) 0:03:10.079 ******
2025-09-19 07:24:21.266566 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:24:21.266576 | orchestrator |
2025-09-19 07:24:21.266586 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-09-19 07:24:21.266596 | orchestrator | Friday 19 September 2025 07:18:30 +0000 (0:00:00.269) 0:03:10.349 ******
2025-09-19 07:24:21.266606 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:24:21.266616 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.266626 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.266636 | orchestrator |
2025-09-19 07:24:21.266646 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2025-09-19 07:24:21.266663 | orchestrator | Friday 19 September 2025 07:18:30 +0000 (0:00:00.657) 0:03:11.007 ******
2025-09-19 07:24:21.266673 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-19 07:24:21.266683 | orchestrator |
2025-09-19 07:24:21.266693 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2025-09-19 07:24:21.266703 | orchestrator | Friday 19 September 2025 07:18:32 +0000 (0:00:01.289) 0:03:12.296 ******
2025-09-19 07:24:21.266713 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:24:21.266723 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.266733 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.266743 | orchestrator |
2025-09-19 07:24:21.266753 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-19 07:24:21.266762 | orchestrator | Friday 19 September 2025 07:18:32 +0000 (0:00:00.613) 0:03:12.910 ******
2025-09-19 07:24:21.266772 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:24:21.266782 | orchestrator |
2025-09-19 07:24:21.266792 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-09-19 07:24:21.266802 | orchestrator | Friday 19 September 2025 07:18:33 +0000 (0:00:00.674) 0:03:13.584 ******
2025-09-19 07:24:21.266822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 07:24:21.266835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 07:24:21.266847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 07:24:21.266869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:24:21.266880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:24:21.266899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:24:21.266910 | orchestrator |
2025-09-19 07:24:21.266920 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-09-19 07:24:21.266930 | orchestrator | Friday 19 September 2025 07:18:36 +0000 (0:00:02.918) 0:03:16.503 ******
2025-09-19 07:24:21.266941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 07:24:21.266963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:24:21.266974 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:24:21.266993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True,
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 07:24:21.267032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.267050 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:21.267069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 07:24:21.267089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.267100 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:21.267110 | orchestrator | 2025-09-19 07:24:21.267120 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-19 07:24:21.267130 | orchestrator | Friday 19 September 2025 07:18:37 +0000 (0:00:00.650) 
0:03:17.153 ****** 2025-09-19 07:24:21.267163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 07:24:21.267175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.267185 | orchestrator | skipping: 
[testbed-node-0] 2025-09-19 07:24:21.267204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 07:24:21.267222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.267232 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 07:24:21.267247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 07:24:21.267259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.267269 | orchestrator | skipping: 
[testbed-node-1] 2025-09-19 07:24:21.267279 | orchestrator | 2025-09-19 07:24:21.267289 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-09-19 07:24:21.267299 | orchestrator | Friday 19 September 2025 07:18:37 +0000 (0:00:00.810) 0:03:17.964 ****** 2025-09-19 07:24:21.267316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:24:21.267334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:24:21.267350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:24:21.267367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.267378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.267395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.267405 | orchestrator | 2025-09-19 07:24:21.267416 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-09-19 07:24:21.267426 | orchestrator | Friday 19 September 2025 07:18:41 +0000 (0:00:03.330) 0:03:21.295 ****** 2025-09-19 07:24:21.267437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:24:21.267452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:24:21.267471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:24:21.267488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.267498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.267509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.267519 | orchestrator | 2025-09-19 07:24:21.267529 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-09-19 07:24:21.267540 | orchestrator | Friday 19 September 2025 07:18:51 +0000 (0:00:10.647) 0:03:31.942 ****** 2025-09-19 07:24:21.267560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 07:24:21.267578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.267588 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:21.267599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 07:24:21.267610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.267620 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:21.267635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 07:24:21.267654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.267672 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:21.267682 | orchestrator | 2025-09-19 07:24:21.267693 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-09-19 07:24:21.267703 | orchestrator | Friday 19 September 2025 07:18:52 +0000 (0:00:00.839) 0:03:32.782 ****** 2025-09-19 07:24:21.267713 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:24:21.267723 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:24:21.267733 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:24:21.267743 | orchestrator | 2025-09-19 07:24:21.267753 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-09-19 07:24:21.267763 | orchestrator | Friday 19 September 2025 07:18:54 +0000 (0:00:01.839) 0:03:34.622 ****** 2025-09-19 07:24:21.267772 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:21.267782 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:21.267792 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:21.267802 | orchestrator | 2025-09-19 07:24:21.267812 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-09-19 07:24:21.267822 | orchestrator | Friday 19 September 2025 07:18:55 +0000 (0:00:00.878) 0:03:35.500 ****** 2025-09-19 07:24:21.267832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:24:21.267851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:24:21.267880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:24:21.267891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.267902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.267912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.267923 | orchestrator | 2025-09-19 07:24:21.267933 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-19 07:24:21.267943 | orchestrator | Friday 19 September 2025 07:18:57 +0000 (0:00:02.271) 0:03:37.772 ****** 2025-09-19 07:24:21.267953 | orchestrator | 2025-09-19 07:24:21.267963 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-19 07:24:21.267973 | orchestrator | Friday 19 September 2025 07:18:57 
+0000 (0:00:00.240) 0:03:38.012 ****** 2025-09-19 07:24:21.267982 | orchestrator | 2025-09-19 07:24:21.267992 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-19 07:24:21.268006 | orchestrator | Friday 19 September 2025 07:18:58 +0000 (0:00:00.420) 0:03:38.433 ****** 2025-09-19 07:24:21.268023 | orchestrator | 2025-09-19 07:24:21.268033 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-09-19 07:24:21.268042 | orchestrator | Friday 19 September 2025 07:18:58 +0000 (0:00:00.382) 0:03:38.816 ****** 2025-09-19 07:24:21.268053 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:24:21.268063 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:24:21.268072 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:24:21.268082 | orchestrator | 2025-09-19 07:24:21.268092 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-09-19 07:24:21.268102 | orchestrator | Friday 19 September 2025 07:19:23 +0000 (0:00:24.698) 0:04:03.514 ****** 2025-09-19 07:24:21.268112 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:24:21.268122 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:24:21.268132 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:24:21.268157 | orchestrator | 2025-09-19 07:24:21.268168 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-09-19 07:24:21.268178 | orchestrator | 2025-09-19 07:24:21.268188 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-19 07:24:21.268198 | orchestrator | Friday 19 September 2025 07:19:31 +0000 (0:00:07.770) 0:04:11.284 ****** 2025-09-19 07:24:21.268208 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:24:21.268219 | 
orchestrator | 2025-09-19 07:24:21.268235 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-19 07:24:21.268245 | orchestrator | Friday 19 September 2025 07:19:32 +0000 (0:00:01.296) 0:04:12.581 ****** 2025-09-19 07:24:21.268255 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:24:21.268265 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:24:21.268275 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:24:21.268284 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:21.268294 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:21.268304 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:21.268314 | orchestrator | 2025-09-19 07:24:21.268324 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-09-19 07:24:21.268334 | orchestrator | Friday 19 September 2025 07:19:34 +0000 (0:00:01.674) 0:04:14.256 ****** 2025-09-19 07:24:21.268343 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:21.268353 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:21.268363 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:21.268373 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:24:21.268383 | orchestrator | 2025-09-19 07:24:21.268393 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-19 07:24:21.268403 | orchestrator | Friday 19 September 2025 07:19:35 +0000 (0:00:01.124) 0:04:15.380 ****** 2025-09-19 07:24:21.268413 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-09-19 07:24:21.268423 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-09-19 07:24:21.268433 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-09-19 07:24:21.268443 | orchestrator | 2025-09-19 07:24:21.268453 | orchestrator | TASK [module-load : Persist modules via modules-load.d] 
************************ 2025-09-19 07:24:21.268463 | orchestrator | Friday 19 September 2025 07:19:36 +0000 (0:00:01.016) 0:04:16.397 ****** 2025-09-19 07:24:21.268473 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-09-19 07:24:21.268483 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-09-19 07:24:21.268492 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-09-19 07:24:21.268502 | orchestrator | 2025-09-19 07:24:21.268512 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-19 07:24:21.268522 | orchestrator | Friday 19 September 2025 07:19:38 +0000 (0:00:01.627) 0:04:18.024 ****** 2025-09-19 07:24:21.268532 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-09-19 07:24:21.268542 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:24:21.268557 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-09-19 07:24:21.268567 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:24:21.268577 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-09-19 07:24:21.268587 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:24:21.268597 | orchestrator | 2025-09-19 07:24:21.268607 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-09-19 07:24:21.268617 | orchestrator | Friday 19 September 2025 07:19:38 +0000 (0:00:00.504) 0:04:18.528 ****** 2025-09-19 07:24:21.268627 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-19 07:24:21.268637 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-19 07:24:21.268647 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 07:24:21.268657 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 07:24:21.268666 | orchestrator | 
changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-19 07:24:21.268676 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:21.268686 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 07:24:21.268696 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 07:24:21.268706 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-19 07:24:21.268716 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-19 07:24:21.268726 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:21.268736 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 07:24:21.268746 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 07:24:21.268760 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:21.268771 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-19 07:24:21.268780 | orchestrator | 2025-09-19 07:24:21.268790 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-09-19 07:24:21.268800 | orchestrator | Friday 19 September 2025 07:19:39 +0000 (0:00:01.449) 0:04:19.977 ****** 2025-09-19 07:24:21.268810 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:21.268820 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:21.268830 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:21.268840 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:24:21.268849 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:24:21.268859 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:24:21.268869 | orchestrator | 2025-09-19 07:24:21.268879 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-09-19 
07:24:21.268889 | orchestrator | Friday 19 September 2025 07:19:41 +0000 (0:00:01.291) 0:04:21.269 ****** 2025-09-19 07:24:21.268899 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:21.268909 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:21.268919 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:21.268929 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:24:21.268939 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:24:21.268949 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:24:21.268959 | orchestrator | 2025-09-19 07:24:21.268969 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-19 07:24:21.268979 | orchestrator | Friday 19 September 2025 07:19:43 +0000 (0:00:02.282) 0:04:23.551 ****** 2025-09-19 07:24:21.268996 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269015 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 
'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269025 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269037 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269052 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269089 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269110 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269166 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269185 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269216 | orchestrator | 2025-09-19 07:24:21.269226 | orchestrator | TASK [nova-cell : include_tasks] 
*********************************************** 2025-09-19 07:24:21.269236 | orchestrator | Friday 19 September 2025 07:19:47 +0000 (0:00:04.045) 0:04:27.597 ****** 2025-09-19 07:24:21.269247 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:24:21.269257 | orchestrator | 2025-09-19 07:24:21.269267 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-19 07:24:21.269277 | orchestrator | Friday 19 September 2025 07:19:49 +0000 (0:00:01.979) 0:04:29.577 ****** 2025-09-19 07:24:21.269291 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269310 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269328 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269339 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269374 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 
2025-09-19 07:24:21.269398 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269420 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269458 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269480 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.269501 | orchestrator | 2025-09-19 07:24:21.269511 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-19 07:24:21.269522 | orchestrator | Friday 19 September 2025 07:19:53 +0000 (0:00:04.124) 0:04:33.701 
****** 2025-09-19 07:24:21.269532 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 07:24:21.269544 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 07:24:21.269558 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.269580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 07:24:21.269591 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:24:21.269602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 07:24:21.269612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.269622 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:24:21.269721 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 07:24:21.269739 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 07:24:21.269761 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.269771 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:24:21.269782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 07:24:21.269792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.269802 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:21.269813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 07:24:21.269829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.269840 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:21.269854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 07:24:21.269871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.269881 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:21.269891 | orchestrator | 2025-09-19 07:24:21.269901 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-19 07:24:21.269911 | orchestrator | Friday 19 September 2025 07:19:57 +0000 (0:00:03.685) 0:04:37.387 ****** 2025-09-19 07:24:21.269922 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 07:24:21.269933 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 07:24:21.269948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.269958 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:24:21.269973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 07:24:21.269990 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 07:24:21.270001 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.270011 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:24:21.270050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 07:24:21.270061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.270071 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:21.270089 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 07:24:21.270111 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 07:24:21.270122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': 
True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.270132 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:24:21.270196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 07:24:21.270209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.270219 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:21.270230 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 07:24:21.270246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.270264 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:21.270274 | orchestrator | 2025-09-19 07:24:21.270284 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-19 07:24:21.270294 | orchestrator | Friday 19 September 2025 07:20:01 +0000 (0:00:04.238) 0:04:41.625 ****** 2025-09-19 07:24:21.270305 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:21.270314 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:21.270324 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:21.270334 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 
07:24:21.270345 | orchestrator | 2025-09-19 07:24:21.270355 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-09-19 07:24:21.270364 | orchestrator | Friday 19 September 2025 07:20:03 +0000 (0:00:01.689) 0:04:43.315 ****** 2025-09-19 07:24:21.270375 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-19 07:24:21.270389 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-19 07:24:21.270400 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-19 07:24:21.270410 | orchestrator | 2025-09-19 07:24:21.270419 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-09-19 07:24:21.270430 | orchestrator | Friday 19 September 2025 07:20:04 +0000 (0:00:01.651) 0:04:44.966 ****** 2025-09-19 07:24:21.270439 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-19 07:24:21.270449 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-19 07:24:21.270459 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-19 07:24:21.270469 | orchestrator | 2025-09-19 07:24:21.270479 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-09-19 07:24:21.270489 | orchestrator | Friday 19 September 2025 07:20:07 +0000 (0:00:02.338) 0:04:47.305 ****** 2025-09-19 07:24:21.270499 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:24:21.270509 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:24:21.270519 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:24:21.270529 | orchestrator | 2025-09-19 07:24:21.270539 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-09-19 07:24:21.270549 | orchestrator | Friday 19 September 2025 07:20:08 +0000 (0:00:00.948) 0:04:48.254 ****** 2025-09-19 07:24:21.270559 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:24:21.270569 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:24:21.270579 | orchestrator | ok: [testbed-node-5] 
2025-09-19 07:24:21.270589 | orchestrator |
2025-09-19 07:24:21.270599 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-09-19 07:24:21.270609 | orchestrator | Friday 19 September 2025  07:20:08 +0000 (0:00:00.481)       0:04:48.736 ******
2025-09-19 07:24:21.270619 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-19 07:24:21.270629 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-19 07:24:21.270639 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-19 07:24:21.270647 | orchestrator |
2025-09-19 07:24:21.270655 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-09-19 07:24:21.270663 | orchestrator | Friday 19 September 2025  07:20:09 +0000 (0:00:01.200)       0:04:49.936 ******
2025-09-19 07:24:21.270671 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-19 07:24:21.270680 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-19 07:24:21.270688 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-19 07:24:21.270696 | orchestrator |
2025-09-19 07:24:21.270704 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-09-19 07:24:21.270713 | orchestrator | Friday 19 September 2025  07:20:11 +0000 (0:00:01.325)       0:04:51.262 ******
2025-09-19 07:24:21.270727 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-19 07:24:21.270736 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-19 07:24:21.270744 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-19 07:24:21.270752 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-09-19 07:24:21.270760 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-09-19 07:24:21.270768 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-09-19 07:24:21.270776 | orchestrator |
2025-09-19 07:24:21.270784 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-09-19 07:24:21.270792 | orchestrator | Friday 19 September 2025  07:20:15 +0000 (0:00:04.151)       0:04:55.414 ******
2025-09-19 07:24:21.270801 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:24:21.270809 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:24:21.270817 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:24:21.270825 | orchestrator |
2025-09-19 07:24:21.270833 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-09-19 07:24:21.270841 | orchestrator | Friday 19 September 2025  07:20:15 +0000 (0:00:00.287)       0:04:55.701 ******
2025-09-19 07:24:21.270849 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:24:21.270857 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:24:21.270866 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:24:21.270874 | orchestrator |
2025-09-19 07:24:21.270882 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-09-19 07:24:21.270890 | orchestrator | Friday 19 September 2025  07:20:16 +0000 (0:00:00.332)       0:04:56.034 ******
2025-09-19 07:24:21.270898 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:24:21.270906 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:24:21.270914 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:24:21.270922 | orchestrator |
2025-09-19 07:24:21.270934 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-09-19 07:24:21.270943 | orchestrator | Friday 19 September 2025  07:20:18 +0000 (0:00:02.574)       0:04:58.609 ******
2025-09-19 07:24:21.270951 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-19 07:24:21.270960 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-19 07:24:21.270968 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-19 07:24:21.270977 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-19 07:24:21.270985 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-19 07:24:21.270993 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-19 07:24:21.271001 | orchestrator |
2025-09-19 07:24:21.271009 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-09-19 07:24:21.271017 | orchestrator | Friday 19 September 2025  07:20:22 +0000 (0:00:03.559)       0:05:02.168 ******
2025-09-19 07:24:21.271031 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-19 07:24:21.271040 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-19 07:24:21.271048 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-19 07:24:21.271056 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-19 07:24:21.271065 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:24:21.271073 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-19 07:24:21.271081 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:24:21.271089 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-19 07:24:21.271097 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:24:21.271110 | orchestrator |
2025-09-19 07:24:21.271118 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-09-19 07:24:21.271126 | orchestrator | Friday 19 September 2025  07:20:25 +0000 (0:00:03.381)       0:05:05.550 ******
2025-09-19 07:24:21.271134 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:24:21.271156 | orchestrator |
2025-09-19 07:24:21.271164 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-09-19 07:24:21.271173 | orchestrator | Friday 19 September 2025  07:20:25 +0000 (0:00:00.133)       0:05:05.684 ******
2025-09-19 07:24:21.271181 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:24:21.271189 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:24:21.271197 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:24:21.271205 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:24:21.271213 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.271221 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.271229 | orchestrator |
2025-09-19 07:24:21.271238 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-09-19 07:24:21.271246 | orchestrator | Friday 19 September 2025  07:20:26 +0000 (0:00:00.800)       0:05:06.485 ******
2025-09-19 07:24:21.271254 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-19 07:24:21.271262 | orchestrator |
2025-09-19 07:24:21.271270 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-09-19 07:24:21.271278 | orchestrator | Friday 19 September 2025  07:20:27 +0000 (0:00:00.709)       0:05:07.194 ******
2025-09-19 07:24:21.271286 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:24:21.271294 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:24:21.271303 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:24:21.271311 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:24:21.271319 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.271327 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.271335 | orchestrator |
2025-09-19 07:24:21.271343 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2025-09-19 07:24:21.271351 | orchestrator | Friday 19 September 2025  07:20:27 +0000 (0:00:00.575)       0:05:07.769 ******
2025-09-19 07:24:21.271359 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 07:24:21.271374 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 07:24:21.271387 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 07:24:21.271401 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 07:24:21.271409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 07:24:21.271418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 07:24:21.271427 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 07:24:21.271439 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 07:24:21.271448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 07:24:21.271467 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 07:24:21.271476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 07:24:21.271485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 07:24:21.271493 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 07:24:21.271507 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 07:24:21.271521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 07:24:21.271529 | orchestrator |
2025-09-19 07:24:21.271541 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2025-09-19 07:24:21.271550 | orchestrator | Friday 19 September 2025  07:20:32 +0000 (0:00:04.598)       0:05:12.368 ******
2025-09-19 07:24:21.271558 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 07:24:21.271567 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 07:24:21.271576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 07:24:21.271588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 07:24:21.271602 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 07:24:21.271614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 07:24:21.271622 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 07:24:21.271631 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 07:24:21.271640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 07:24:21.271653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 07:24:21.271670 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 07:24:21.271679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 07:24:21.271687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 07:24:21.271695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 07:24:21.271704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 07:24:21.271712 | orchestrator |
2025-09-19 07:24:21.271720 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2025-09-19 07:24:21.271729 | orchestrator | Friday 19 September 2025  07:20:39 +0000 (0:00:07.461)       0:05:19.830 ******
2025-09-19 07:24:21.271744 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:24:21.271752 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:24:21.271760 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:24:21.271768 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:24:21.271776 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.271784 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.271792 | orchestrator |
2025-09-19 07:24:21.271804 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2025-09-19 07:24:21.271812 | orchestrator | Friday 19 September 2025  07:20:41 +0000 (0:00:01.902)       0:05:21.732 ******
2025-09-19 07:24:21.271820 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-19 07:24:21.271828 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-19 07:24:21.271836 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-19 07:24:21.271844 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-19 07:24:21.271852 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-19 07:24:21.271861 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-19 07:24:21.271869 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.271877 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-19 07:24:21.271885 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-19 07:24:21.271893 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.271901 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-19 07:24:21.271909 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:24:21.271921 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-19 07:24:21.271929 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-19 07:24:21.271937 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-19 07:24:21.271945 | orchestrator |
2025-09-19 07:24:21.271953 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2025-09-19 07:24:21.271961 | orchestrator | Friday 19 September 2025  07:20:45 +0000 (0:00:03.813)       0:05:25.546 ******
2025-09-19 07:24:21.271969 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:24:21.271977 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:24:21.271985 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:24:21.271993 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:24:21.272001 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.272009 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.272017 | orchestrator |
2025-09-19 07:24:21.272025 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2025-09-19 07:24:21.272033 | orchestrator | Friday 19 September 2025  07:20:46 +0000 (0:00:00.651)       0:05:26.197 ******
2025-09-19 07:24:21.272042 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-19 07:24:21.272050 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-19 07:24:21.272058 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-19 07:24:21.272066 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-19 07:24:21.272074 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-19 07:24:21.272082 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-19 07:24:21.272096 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-19 07:24:21.272104 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-19 07:24:21.272112 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-19 07:24:21.272120 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-19 07:24:21.272128 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.272136 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-19 07:24:21.272157 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.272166 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-19 07:24:21.272174 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:24:21.272182 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-19 07:24:21.272190 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-19 07:24:21.272198 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-19 07:24:21.272206 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-19 07:24:21.272214 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-19 07:24:21.272227 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-19 07:24:21.272235 | orchestrator |
2025-09-19 07:24:21.272243 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2025-09-19 07:24:21.272251 | orchestrator | Friday 19 September 2025  07:20:50 +0000 (0:00:04.687)       0:05:30.885 ******
2025-09-19 07:24:21.272260 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 07:24:21.272268 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 07:24:21.272276 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 07:24:21.272283 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-19 07:24:21.272292 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 07:24:21.272299 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 07:24:21.272307 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-19 07:24:21.272315 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-19 07:24:21.272323 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 07:24:21.272337 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 07:24:21.272346 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 07:24:21.272354 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 07:24:21.272361 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-19 07:24:21.272369 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.272378 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-19 07:24:21.272391 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:24:21.272399 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-19 07:24:21.272407 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.272415 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-19 07:24:21.272423 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-19 07:24:21.272431 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-19 07:24:21.272439 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 07:24:21.272447 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 07:24:21.272455 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 07:24:21.272463 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-19 07:24:21.272471 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-19 07:24:21.272479 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-19 07:24:21.272487 | orchestrator |
2025-09-19 07:24:21.272495 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2025-09-19 07:24:21.272504 | orchestrator | Friday 19 September 2025  07:20:57 +0000 (0:00:06.365)       0:05:37.251 ******
2025-09-19 07:24:21.272512 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:24:21.272520 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:24:21.272528 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:24:21.272536 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:24:21.272544 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:21.272552 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:21.272560 | orchestrator |
2025-09-19 07:24:21.272568 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2025-09-19 07:24:21.272577 | orchestrator | Friday 19 September 2025
07:20:57 +0000 (0:00:00.537) 0:05:37.788 ****** 2025-09-19 07:24:21.272585 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:24:21.272593 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:24:21.272601 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:24:21.272609 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:21.272617 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:21.272625 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:21.272633 | orchestrator | 2025-09-19 07:24:21.272641 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-09-19 07:24:21.272649 | orchestrator | Friday 19 September 2025 07:20:58 +0000 (0:00:00.650) 0:05:38.439 ****** 2025-09-19 07:24:21.272657 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:24:21.272665 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:21.272673 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:21.272682 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:21.272690 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:24:21.272698 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:24:21.272706 | orchestrator | 2025-09-19 07:24:21.272714 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-09-19 07:24:21.272722 | orchestrator | Friday 19 September 2025 07:21:01 +0000 (0:00:03.067) 0:05:41.506 ****** 2025-09-19 07:24:21.272736 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 07:24:21.272753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 07:24:21.272761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.272770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 07:24:21.272779 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 07:24:21.272792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.272806 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:24:21.272814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 07:24:21.272827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.272836 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:24:21.272844 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:21.272853 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 07:24:21.272861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 07:24:21.272869 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.272878 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:24:21.272891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 07:24:21.272904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.272912 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:21.272924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 07:24:21.272933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:24:21.272941 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:21.272950 | orchestrator | 2025-09-19 07:24:21.272958 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-09-19 07:24:21.272966 | orchestrator | Friday 19 September 2025 07:21:05 +0000 (0:00:03.936) 0:05:45.443 ****** 2025-09-19 07:24:21.272974 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-19 07:24:21.272982 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-19 07:24:21.272991 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:24:21.272999 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-19 07:24:21.273007 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-19 07:24:21.273015 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:24:21.273023 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-19 07:24:21.273031 | orchestrator | skipping: [testbed-node-5] => 
(item=nova-compute-ironic)  2025-09-19 07:24:21.273039 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:24:21.273047 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-19 07:24:21.273055 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-19 07:24:21.273063 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:21.273071 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-19 07:24:21.273079 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-19 07:24:21.273092 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:21.273100 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-19 07:24:21.273108 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-19 07:24:21.273116 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:21.273124 | orchestrator | 2025-09-19 07:24:21.273132 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-09-19 07:24:21.273172 | orchestrator | Friday 19 September 2025 07:21:06 +0000 (0:00:00.913) 0:05:46.356 ****** 2025-09-19 07:24:21.273226 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 07:24:21.273241 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 07:24:21.273250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 07:24:21.273259 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 07:24:21.273267 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 07:24:21.273286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 07:24:21.273295 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 07:24:21.273308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 07:24:21.273316 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 07:24:21.273325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.273334 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.273346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.273357 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.273364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.273375 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 07:24:21.273382 | orchestrator | 2025-09-19 07:24:21.273389 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-19 07:24:21.273396 | orchestrator | Friday 19 September 2025 07:21:10 +0000 (0:00:03.665) 0:05:50.021 ****** 2025-09-19 07:24:21.273403 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:24:21.273410 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:24:21.273417 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:24:21.273424 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:21.273431 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:21.273438 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:21.273444 | orchestrator | 2025-09-19 07:24:21.273451 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-19 07:24:21.273462 | orchestrator | Friday 19 September 2025 07:21:11 +0000 (0:00:01.101) 0:05:51.123 ****** 2025-09-19 07:24:21.273469 | orchestrator | 2025-09-19 07:24:21.273476 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-19 07:24:21.273483 | orchestrator | Friday 19 September 2025 07:21:11 +0000 (0:00:00.226) 0:05:51.349 ****** 2025-09-19 07:24:21.273490 | orchestrator | 2025-09-19 07:24:21.273497 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-19 07:24:21.273503 | orchestrator | Friday 19 September 2025 07:21:11 +0000 (0:00:00.233) 0:05:51.582 ****** 2025-09-19 07:24:21.273510 | orchestrator | 2025-09-19 07:24:21.273517 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2025-09-19 07:24:21.273524 | orchestrator | Friday 19 September 2025 07:21:11 +0000 (0:00:00.127) 0:05:51.710 ****** 2025-09-19 07:24:21.273531 | orchestrator | 2025-09-19 07:24:21.273538 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-19 07:24:21.273544 | orchestrator | Friday 19 September 2025 07:21:11 +0000 (0:00:00.117) 0:05:51.827 ****** 2025-09-19 07:24:21.273551 | orchestrator | 2025-09-19 07:24:21.273558 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-19 07:24:21.273565 | orchestrator | Friday 19 September 2025 07:21:11 +0000 (0:00:00.120) 0:05:51.948 ****** 2025-09-19 07:24:21.273572 | orchestrator | 2025-09-19 07:24:21.273579 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-09-19 07:24:21.273585 | orchestrator | Friday 19 September 2025 07:21:12 +0000 (0:00:00.135) 0:05:52.083 ****** 2025-09-19 07:24:21.273592 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:24:21.273599 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:24:21.273606 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:24:21.273613 | orchestrator | 2025-09-19 07:24:21.273620 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-09-19 07:24:21.273626 | orchestrator | Friday 19 September 2025 07:21:19 +0000 (0:00:07.780) 0:05:59.864 ****** 2025-09-19 07:24:21.273633 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:24:21.273640 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:24:21.273647 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:24:21.273654 | orchestrator | 2025-09-19 07:24:21.273661 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-09-19 07:24:21.273668 | orchestrator | Friday 19 September 2025 07:21:32 +0000 (0:00:12.362) 
0:06:12.227 ****** 2025-09-19 07:24:21.273675 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:24:21.273685 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:24:21.273692 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:24:21.273699 | orchestrator | 2025-09-19 07:24:21.273706 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-09-19 07:24:21.273713 | orchestrator | Friday 19 September 2025 07:21:56 +0000 (0:00:24.024) 0:06:36.251 ****** 2025-09-19 07:24:21.273720 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:24:21.273726 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:24:21.273733 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:24:21.273740 | orchestrator | 2025-09-19 07:24:21.273747 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-09-19 07:24:21.273754 | orchestrator | Friday 19 September 2025 07:22:32 +0000 (0:00:36.366) 0:07:12.618 ****** 2025-09-19 07:24:21.273761 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2025-09-19 07:24:21.273768 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2025-09-19 07:24:21.273775 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 
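The "FAILED - RETRYING: … Checking libvirt container is ready (10 retries left)" lines above come from Ansible's `until`/`retries`/`delay` loop around the libvirt readiness probe. A minimal Python sketch of that retry semantics — the `probe` callable and the message format are illustrative assumptions, not the actual kolla-ansible task:

```python
import time

def wait_until_ready(probe, retries=10, delay=5.0):
    """Retry a readiness probe, loosely mirroring Ansible's
    until/retries/delay loop (retries + 1 attempts in total).

    `probe` is any callable returning True once the service is ready;
    the printed line imitates Ansible's "FAILED - RETRYING" output.
    """
    for remaining in range(retries, -1, -1):
        if probe():
            return True
        if remaining:
            print(f"FAILED - RETRYING: probe not ready ({remaining} retries left).")
            time.sleep(delay)
    return False
```

In the log each node prints exactly one retry line before reporting `changed`, i.e. the probe succeeded on the second attempt.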
2025-09-19 07:24:21.273782 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:24:21.273788 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:24:21.273795 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:24:21.273802 | orchestrator | 2025-09-19 07:24:21.273809 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-09-19 07:24:21.273822 | orchestrator | Friday 19 September 2025 07:22:38 +0000 (0:00:06.356) 0:07:18.974 ****** 2025-09-19 07:24:21.273829 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:24:21.273836 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:24:21.273843 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:24:21.273849 | orchestrator | 2025-09-19 07:24:21.273885 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-09-19 07:24:21.273893 | orchestrator | Friday 19 September 2025 07:22:40 +0000 (0:00:01.054) 0:07:20.029 ****** 2025-09-19 07:24:21.273900 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:24:21.273907 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:24:21.273914 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:24:21.273920 | orchestrator | 2025-09-19 07:24:21.273927 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-09-19 07:24:21.273934 | orchestrator | Friday 19 September 2025 07:23:09 +0000 (0:00:29.138) 0:07:49.167 ****** 2025-09-19 07:24:21.273941 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:24:21.273948 | orchestrator | 2025-09-19 07:24:21.273955 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-09-19 07:24:21.273962 | orchestrator | Friday 19 September 2025 07:23:09 +0000 (0:00:00.129) 0:07:49.297 ****** 2025-09-19 07:24:21.273969 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:24:21.273976 | orchestrator | skipping: [testbed-node-3] 
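The "Waiting for nova-compute services to register themselves" task that follows polls the service list from a delegate host (testbed-node-4 -> testbed-node-0) until every expected compute host appears. A hedged sketch of that wait loop; `list_hosts` is an assumed stand-in for the real service-list query, not kolla-ansible's implementation:

```python
import time

def wait_for_compute_registration(list_hosts, expected, retries=20, delay=3.0):
    """Poll a service listing until all expected compute hosts are registered.

    `list_hosts` returns the currently registered host names (in the real
    deployment this would wrap a nova service listing on the delegate host).
    Returns the set of hosts still missing -- empty on success.
    """
    missing = set(expected)
    for _ in range(retries):
        missing = set(expected) - set(list_hosts())
        if not missing:
            return missing
        time.sleep(delay)
    return missing
```

This matches the log's shape: one "FAILED - RETRYING … (20 retries left)" while the listing is incomplete, then `ok` once all hosts have registered.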
2025-09-19 07:24:21.273983 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:21.273989 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:21.273996 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:21.274004 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-09-19 07:24:21.274011 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-19 07:24:21.274050 | orchestrator | 2025-09-19 07:24:21.274058 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-09-19 07:24:21.274065 | orchestrator | Friday 19 September 2025 07:23:30 +0000 (0:00:21.329) 0:08:10.627 ****** 2025-09-19 07:24:21.274072 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:21.274080 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:21.274087 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:21.274093 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:24:21.274100 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:24:21.274107 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:24:21.274114 | orchestrator | 2025-09-19 07:24:21.274121 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-09-19 07:24:21.274128 | orchestrator | Friday 19 September 2025 07:23:40 +0000 (0:00:10.180) 0:08:20.808 ****** 2025-09-19 07:24:21.274135 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:21.274154 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:21.274161 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:24:21.274168 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:24:21.274180 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:21.274191 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2025-09-19 07:24:21.274206 | 
orchestrator | 2025-09-19 07:24:21.274225 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-19 07:24:21.274236 | orchestrator | Friday 19 September 2025 07:23:45 +0000 (0:00:04.511) 0:08:25.319 ****** 2025-09-19 07:24:21.274246 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-19 07:24:21.274257 | orchestrator | 2025-09-19 07:24:21.274268 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-19 07:24:21.274279 | orchestrator | Friday 19 September 2025 07:23:58 +0000 (0:00:12.917) 0:08:38.237 ****** 2025-09-19 07:24:21.274290 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-19 07:24:21.274299 | orchestrator | 2025-09-19 07:24:21.274310 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-09-19 07:24:21.274328 | orchestrator | Friday 19 September 2025 07:23:59 +0000 (0:00:01.319) 0:08:39.556 ****** 2025-09-19 07:24:21.274340 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:24:21.274352 | orchestrator | 2025-09-19 07:24:21.274363 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-09-19 07:24:21.274376 | orchestrator | Friday 19 September 2025 07:24:00 +0000 (0:00:01.367) 0:08:40.924 ****** 2025-09-19 07:24:21.274388 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-19 07:24:21.274395 | orchestrator | 2025-09-19 07:24:21.274402 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-09-19 07:24:21.274415 | orchestrator | Friday 19 September 2025 07:24:11 +0000 (0:00:10.959) 0:08:51.883 ****** 2025-09-19 07:24:21.274423 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:24:21.274429 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:24:21.274436 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:24:21.274443 | 
orchestrator | ok: [testbed-node-5] 2025-09-19 07:24:21.274450 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:24:21.274457 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:24:21.274464 | orchestrator | 2025-09-19 07:24:21.274471 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-09-19 07:24:21.274478 | orchestrator | 2025-09-19 07:24:21.274485 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-09-19 07:24:21.274492 | orchestrator | Friday 19 September 2025 07:24:13 +0000 (0:00:01.871) 0:08:53.754 ****** 2025-09-19 07:24:21.274498 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:24:21.274505 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:24:21.274512 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:24:21.274519 | orchestrator | 2025-09-19 07:24:21.274526 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-09-19 07:24:21.274533 | orchestrator | 2025-09-19 07:24:21.274540 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-09-19 07:24:21.274547 | orchestrator | Friday 19 September 2025 07:24:14 +0000 (0:00:00.994) 0:08:54.748 ****** 2025-09-19 07:24:21.274554 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:21.274561 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:21.274567 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:21.274574 | orchestrator | 2025-09-19 07:24:21.274581 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-09-19 07:24:21.274588 | orchestrator | 2025-09-19 07:24:21.274595 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-09-19 07:24:21.274606 | orchestrator | Friday 19 September 2025 07:24:15 +0000 (0:00:00.611) 0:08:55.360 ****** 2025-09-19 07:24:21.274614 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-conductor)  2025-09-19 07:24:21.274621 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-19 07:24:21.274627 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-19 07:24:21.274634 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-09-19 07:24:21.274641 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-09-19 07:24:21.274648 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-09-19 07:24:21.274655 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:24:21.274661 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-09-19 07:24:21.274668 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-19 07:24:21.274675 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-19 07:24:21.274682 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-09-19 07:24:21.274689 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-09-19 07:24:21.274696 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-09-19 07:24:21.274703 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:24:21.274710 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-09-19 07:24:21.274723 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-19 07:24:21.274730 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-19 07:24:21.274736 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-09-19 07:24:21.274743 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-09-19 07:24:21.274750 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-09-19 07:24:21.274757 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:24:21.274764 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-conductor)  2025-09-19 07:24:21.274771 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-19 07:24:21.274777 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-19 07:24:21.274784 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-09-19 07:24:21.274791 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-09-19 07:24:21.274798 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-09-19 07:24:21.274805 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:21.274812 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-09-19 07:24:21.274819 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-19 07:24:21.274825 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-19 07:24:21.274832 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-09-19 07:24:21.274839 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-09-19 07:24:21.274846 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-09-19 07:24:21.274853 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:21.274859 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-09-19 07:24:21.274866 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-19 07:24:21.274873 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-19 07:24:21.274880 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-09-19 07:24:21.274887 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-09-19 07:24:21.274894 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-09-19 07:24:21.274900 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:21.274907 | orchestrator | 
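Every item of "Reload nova cell services to remove RPC version cap" is skipped above because this run is a plain deploy, so no RPC version cap was applied. A minimal sketch of that per-item decision; the service names are taken from the log output, while the boolean flag is an assumption standing in for kolla-ansible's actual upgrade condition:

```python
def services_to_reload(services, rpc_version_pinned):
    """Decide which nova cell services need a reload to drop an RPC version cap.

    When no pin was applied (plain deploy, not an upgrade) every item is
    skipped, as seen per host in the log above.
    """
    if not rpc_version_pinned:
        return []  # all items skipped
    return list(services)

# Item list as it appears in the skipped loop above.
NOVA_CELL_SERVICES = [
    "nova-conductor", "nova-compute", "nova-compute-ironic",
    "nova-novncproxy", "nova-serialproxy", "nova-spicehtml5proxy",
]
```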
2025-09-19 07:24:21.274914 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-09-19 07:24:21.274921 | orchestrator | 2025-09-19 07:24:21.274928 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-09-19 07:24:21.274935 | orchestrator | Friday 19 September 2025 07:24:16 +0000 (0:00:01.179) 0:08:56.540 ****** 2025-09-19 07:24:21.274941 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-09-19 07:24:21.274952 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-09-19 07:24:21.274959 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:21.274966 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-09-19 07:24:21.274973 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-09-19 07:24:21.274980 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:21.274987 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-09-19 07:24:21.274994 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-09-19 07:24:21.275001 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:21.275008 | orchestrator | 2025-09-19 07:24:21.275014 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-09-19 07:24:21.275021 | orchestrator | 2025-09-19 07:24:21.275028 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-09-19 07:24:21.275035 | orchestrator | Friday 19 September 2025 07:24:16 +0000 (0:00:00.468) 0:08:57.008 ****** 2025-09-19 07:24:21.275042 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:21.275054 | orchestrator | 2025-09-19 07:24:21.275061 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-09-19 07:24:21.275067 | orchestrator | 2025-09-19 07:24:21.275074 | orchestrator | TASK [nova-cell : Run Nova 
cell online database migrations] ******************** 2025-09-19 07:24:21.275081 | orchestrator | Friday 19 September 2025 07:24:17 +0000 (0:00:00.757) 0:08:57.765 ****** 2025-09-19 07:24:21.275088 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:21.275095 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:21.275102 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:21.275108 | orchestrator | 2025-09-19 07:24:21.275115 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:24:21.275125 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:24:21.275134 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-09-19 07:24:21.275179 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-19 07:24:21.275188 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-19 07:24:21.275195 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-19 07:24:21.275202 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-09-19 07:24:21.275209 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-09-19 07:24:21.275215 | orchestrator | 2025-09-19 07:24:21.275222 | orchestrator | 2025-09-19 07:24:21.275229 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:24:21.275236 | orchestrator | Friday 19 September 2025 07:24:18 +0000 (0:00:00.400) 0:08:58.166 ****** 2025-09-19 07:24:21.275243 | orchestrator | =============================================================================== 2025-09-19 07:24:21.275250 | orchestrator | 
nova-cell : Restart nova-libvirt container ----------------------------- 36.37s 2025-09-19 07:24:21.275257 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 29.14s 2025-09-19 07:24:21.275264 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 27.50s 2025-09-19 07:24:21.275271 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 24.70s 2025-09-19 07:24:21.275277 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 24.02s 2025-09-19 07:24:21.275284 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.33s 2025-09-19 07:24:21.275291 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 19.66s 2025-09-19 07:24:21.275298 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.04s 2025-09-19 07:24:21.275305 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.34s 2025-09-19 07:24:21.275311 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.92s 2025-09-19 07:24:21.275318 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.85s 2025-09-19 07:24:21.275325 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 12.36s 2025-09-19 07:24:21.275332 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.36s 2025-09-19 07:24:21.275339 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.20s 2025-09-19 07:24:21.275346 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.96s 2025-09-19 07:24:21.275358 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 10.65s 2025-09-19 07:24:21.275365 | orchestrator | nova-cell : 
Fail if nova-compute service failed to register ------------ 10.18s 2025-09-19 07:24:21.275372 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.37s 2025-09-19 07:24:21.275379 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 7.78s 2025-09-19 07:24:21.275386 | orchestrator | nova : Restart nova-api container --------------------------------------- 7.77s 2025-09-19 07:24:21.275397 | 2025-09-19 07:24:21 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:24:24.300257 | orchestrator | 2025-09-19 07:24:24 | INFO  | Task 1e23eba2-3210-495b-8e2f-05b25d4173c2 is in state STARTED
2025-09-19 07:26:53.405284 | orchestrator | 2025-09-19 07:26:53 | INFO  | Task 1e23eba2-3210-495b-8e2f-05b25d4173c2 is in state SUCCESS 2025-09-19 07:26:53.407031 | orchestrator | 2025-09-19 07:26:53.407078 | orchestrator | 2025-09-19 07:26:53.407086 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:26:53.407094 | orchestrator | 2025-09-19 07:26:53.407100 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 07:26:53.407107 | orchestrator | Friday 19 September 2025 07:22:10 +0000 (0:00:00.248) 0:00:00.248 ****** 2025-09-19 07:26:53.407113 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:26:53.407120 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:26:53.407127 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:26:53.407132 | orchestrator | 2025-09-19 07:26:53.407139 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:26:53.407145 | orchestrator | Friday 19 September 2025 07:22:10 +0000 (0:00:00.300) 0:00:00.549 ****** 2025-09-19 07:26:53.407151 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-09-19 07:26:53.407157 | orchestrator | ok: [testbed-node-1] =>
(item=enable_octavia_True)
2025-09-19 07:26:53.407164 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2025-09-19 07:26:53.407170 | orchestrator |
2025-09-19 07:26:53.407176 | orchestrator | PLAY [Apply role octavia] ******************************************************
2025-09-19 07:26:53.407182 | orchestrator |
2025-09-19 07:26:53.407188 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-19 07:26:53.407194 | orchestrator | Friday 19 September 2025 07:22:10 +0000 (0:00:00.395) 0:00:00.944 ******
2025-09-19 07:26:53.407200 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:26:53.407207 | orchestrator |
2025-09-19 07:26:53.407213 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2025-09-19 07:26:53.407219 | orchestrator | Friday 19 September 2025 07:22:11 +0000 (0:00:00.570) 0:00:01.515 ******
2025-09-19 07:26:53.407226 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2025-09-19 07:26:53.407232 | orchestrator |
2025-09-19 07:26:53.407267 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2025-09-19 07:26:53.407295 | orchestrator | Friday 19 September 2025 07:22:14 +0000 (0:00:03.293) 0:00:04.808 ******
2025-09-19 07:26:53.407302 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2025-09-19 07:26:53.407308 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2025-09-19 07:26:53.407329 | orchestrator |
2025-09-19 07:26:53.407336 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2025-09-19 07:26:53.407342 | orchestrator | Friday 19 September 2025 07:22:21 +0000 (0:00:06.560) 0:00:11.369 ******
2025-09-19 07:26:53.407348 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 07:26:53.407354 | orchestrator |
2025-09-19 07:26:53.407360 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2025-09-19 07:26:53.407366 | orchestrator | Friday 19 September 2025 07:22:24 +0000 (0:00:03.593) 0:00:14.962 ******
2025-09-19 07:26:53.407372 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 07:26:53.407379 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-09-19 07:26:53.407385 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-09-19 07:26:53.407391 | orchestrator |
2025-09-19 07:26:53.407397 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2025-09-19 07:26:53.407476 | orchestrator | Friday 19 September 2025 07:22:32 +0000 (0:00:07.987) 0:00:22.949 ******
2025-09-19 07:26:53.407483 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 07:26:53.407489 | orchestrator |
2025-09-19 07:26:53.407495 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2025-09-19 07:26:53.407501 | orchestrator | Friday 19 September 2025 07:22:36 +0000 (0:00:03.616) 0:00:26.566 ******
2025-09-19 07:26:53.407518 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2025-09-19 07:26:53.407524 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2025-09-19 07:26:53.407530 | orchestrator |
2025-09-19 07:26:53.407537 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2025-09-19 07:26:53.407543 | orchestrator | Friday 19 September 2025 07:22:44 +0000 (0:00:07.822) 0:00:34.388 ******
2025-09-19 07:26:53.407549 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2025-09-19 07:26:53.407555 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2025-09-19 07:26:53.407561 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2025-09-19 07:26:53.407567 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2025-09-19 07:26:53.407573 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2025-09-19 07:26:53.407578 | orchestrator |
2025-09-19 07:26:53.407584 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-19 07:26:53.407590 | orchestrator | Friday 19 September 2025 07:23:00 +0000 (0:00:15.814) 0:00:50.203 ******
2025-09-19 07:26:53.407597 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:26:53.407605 | orchestrator |
2025-09-19 07:26:53.407612 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2025-09-19 07:26:53.407619 | orchestrator | Friday 19 September 2025 07:23:00 +0000 (0:00:00.545) 0:00:50.748 ******
2025-09-19 07:26:53.407625 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:26:53.407632 | orchestrator |
2025-09-19 07:26:53.408066 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2025-09-19 07:26:53.408088 | orchestrator | Friday 19 September 2025 07:23:05 +0000 (0:00:05.083) 0:00:55.831 ******
2025-09-19 07:26:53.408097 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:26:53.408106 | orchestrator |
2025-09-19 07:26:53.408114 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-09-19 07:26:53.408212 | orchestrator | Friday 19 September 2025 07:23:10 +0000 (0:00:04.432) 0:01:00.263 ******
2025-09-19 07:26:53.408227 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:26:53.408251 | orchestrator |
2025-09-19 07:26:53.408260 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2025-09-19 07:26:53.408268 | orchestrator | Friday 19 September 2025 07:23:13 +0000 (0:00:03.373) 0:01:03.637 ******
2025-09-19 07:26:53.408280 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-09-19 07:26:53.408294 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-09-19 07:26:53.408303 | orchestrator |
2025-09-19 07:26:53.408312 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2025-09-19 07:26:53.408339 | orchestrator | Friday 19 September 2025 07:23:24 +0000 (0:00:10.786) 0:01:14.424 ******
2025-09-19 07:26:53.408349 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2025-09-19 07:26:53.408357 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2025-09-19 07:26:53.408368 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2025-09-19 07:26:53.408379 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2025-09-19 07:26:53.408390 | orchestrator |
2025-09-19 07:26:53.408399 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2025-09-19 07:26:53.408409 | orchestrator | Friday 19 September 2025 07:23:41 +0000 (0:00:17.544) 0:01:31.968 ******
2025-09-19 07:26:53.408417 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:26:53.408423 | orchestrator |
2025-09-19 07:26:53.408429 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2025-09-19 07:26:53.408436 | orchestrator | Friday 19 September 2025 07:23:46 +0000 (0:00:04.677) 0:01:36.645 ******
2025-09-19 07:26:53.408442 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:26:53.408448 | orchestrator |
2025-09-19 07:26:53.408454 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2025-09-19 07:26:53.408460 | orchestrator | Friday 19 September 2025 07:23:52 +0000 (0:00:06.229) 0:01:42.875 ******
2025-09-19 07:26:53.408466 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:26:53.408472 | orchestrator |
2025-09-19 07:26:53.408478 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2025-09-19 07:26:53.408484 | orchestrator | Friday 19 September 2025 07:23:52 +0000 (0:00:00.222) 0:01:43.098 ******
2025-09-19 07:26:53.408490 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:26:53.408496 | orchestrator |
2025-09-19 07:26:53.408501 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-19 07:26:53.408507 | orchestrator | Friday 19 September 2025 07:23:57 +0000 (0:00:04.706) 0:01:47.804 ******
2025-09-19 07:26:53.408513 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:26:53.408519 | orchestrator |
2025-09-19 07:26:53.408525 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2025-09-19 07:26:53.408531 | orchestrator | Friday 19 September 2025 07:23:58 +0000 (0:00:01.043) 0:01:48.848 ******
2025-09-19 07:26:53.408537 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:26:53.408562 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:26:53.408569 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:26:53.408575 | orchestrator |
2025-09-19 07:26:53.408585 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2025-09-19 07:26:53.408591 | orchestrator | Friday 19 September 2025 07:24:04 +0000 (0:00:05.472) 0:01:54.320 ******
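The "Add rules for security groups" task above iterates over (security group, rule) pairs, exactly as the logged loop items show: ICMP, SSH (22), and the amphora API (9443) on lb-mgmt-sec-grp, plus heartbeat UDP 5555 on lb-health-mgr-sec-grp. A small self-contained sketch of that expansion; the rule data is copied from the log, while the rendering helper is illustrative and not part of kolla-ansible:

```python
# (group, rule) pairs exactly as they appear in the loop items logged above.
RULES = [
    ({"name": "lb-mgmt-sec-grp", "enabled": True}, {"protocol": "icmp"}),
    ({"name": "lb-mgmt-sec-grp", "enabled": True},
     {"protocol": "tcp", "src_port": 22, "dst_port": 22}),
    ({"name": "lb-mgmt-sec-grp", "enabled": True},
     {"protocol": "tcp", "src_port": "9443", "dst_port": "9443"}),
    ({"name": "lb-health-mgr-sec-grp", "enabled": True},
     {"protocol": "udp", "src_port": "5555", "dst_port": "5555"}),
]

def describe_rule(group, rule):
    """Render one loop item the way an operator might read it (illustrative)."""
    port = f" port {rule['dst_port']}" if "dst_port" in rule else ""
    return f"{group['name']}: allow {rule['protocol']}{port}"

# Only rules whose group is enabled would be applied.
descriptions = [describe_rule(g, r) for g, r in RULES if g.get("enabled")]
```

Each pair maps to one security-group-rule creation, which is why the task logs four `changed` items for a single play host.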
2025-09-19 07:26:53.408604 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:26:53.408610 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:26:53.408616 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:26:53.408622 | orchestrator |
2025-09-19 07:26:53.408627 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2025-09-19 07:26:53.408640 | orchestrator | Friday 19 September 2025 07:24:08 +0000 (0:00:04.580) 0:01:58.901 ******
2025-09-19 07:26:53.408646 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:26:53.408652 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:26:53.408658 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:26:53.408664 | orchestrator |
2025-09-19 07:26:53.408670 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2025-09-19 07:26:53.408676 | orchestrator | Friday 19 September 2025 07:24:09 +0000 (0:00:00.809) 0:01:59.711 ******
2025-09-19 07:26:53.408682 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:26:53.408688 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:26:53.408694 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:26:53.408699 | orchestrator |
2025-09-19 07:26:53.408705 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2025-09-19 07:26:53.408711 | orchestrator | Friday 19 September 2025 07:24:11 +0000 (0:00:02.038) 0:02:01.750 ******
2025-09-19 07:26:53.408717 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:26:53.408723 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:26:53.408729 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:26:53.408735 | orchestrator |
2025-09-19 07:26:53.408741 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2025-09-19 07:26:53.408750 | orchestrator | Friday 19 September 2025 07:24:13 +0000 (0:00:01.452) 0:02:03.202 ******
2025-09-19 07:26:53.408764 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:26:53.408775 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:26:53.408785 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:26:53.408794 | orchestrator |
2025-09-19 07:26:53.408803 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2025-09-19 07:26:53.408812 | orchestrator | Friday 19 September 2025 07:24:14 +0000 (0:00:01.271) 0:02:04.473 ******
2025-09-19 07:26:53.408821 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:26:53.408830 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:26:53.408839 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:26:53.408848 | orchestrator |
2025-09-19 07:26:53.408895 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2025-09-19 07:26:53.408907 | orchestrator | Friday 19 September 2025 07:24:16 +0000 (0:00:01.973) 0:02:06.447 ******
2025-09-19 07:26:53.408917 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:26:53.408926 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:26:53.408933 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:26:53.408939 | orchestrator |
2025-09-19 07:26:53.408946 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2025-09-19 07:26:53.408953 | orchestrator | Friday 19 September 2025 07:24:17 +0000 (0:00:01.656) 0:02:08.103 ******
2025-09-19 07:26:53.408959 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:26:53.408966 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:26:53.408972 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:26:53.408979 | orchestrator |
2025-09-19 07:26:53.408985 | orchestrator | TASK [octavia : Gather facts] **************************************************
2025-09-19 07:26:53.408992 | orchestrator | Friday 19 September 2025 07:24:18 +0000 (0:00:00.636) 0:02:08.740 ******
2025-09-19 07:26:53.409002 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:26:53.409010 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:26:53.409017 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:26:53.409023 | orchestrator |
2025-09-19 07:26:53.409029 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-19 07:26:53.409035 | orchestrator | Friday 19 September 2025 07:24:21 +0000 (0:00:02.571) 0:02:11.312 ******
2025-09-19 07:26:53.409041 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:26:53.409047 | orchestrator |
2025-09-19 07:26:53.409053 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2025-09-19 07:26:53.409059 | orchestrator | Friday 19 September 2025 07:24:21 +0000 (0:00:00.607) 0:02:11.920 ******
2025-09-19 07:26:53.409065 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:26:53.409078 | orchestrator |
2025-09-19 07:26:53.409084 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-09-19 07:26:53.409090 | orchestrator | Friday 19 September 2025 07:24:25 +0000 (0:00:03.396) 0:02:15.316 ******
2025-09-19 07:26:53.409095 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:26:53.409101 | orchestrator |
2025-09-19 07:26:53.409107 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2025-09-19 07:26:53.409113 | orchestrator | Friday 19 September 2025 07:24:28 +0000 (0:00:03.121) 0:02:18.438 ******
2025-09-19 07:26:53.409119 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-09-19 07:26:53.409125 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-09-19 07:26:53.409131 | orchestrator |
2025-09-19 07:26:53.409137 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2025-09-19 07:26:53.409143 | orchestrator | Friday 19 September 2025 07:24:34 +0000 (0:00:06.573) 0:02:25.012 ******
2025-09-19 07:26:53.409149 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:26:53.409155 | orchestrator |
2025-09-19 07:26:53.409161 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2025-09-19 07:26:53.409167 | orchestrator | Friday 19 September 2025 07:24:38 +0000 (0:00:03.306) 0:02:28.318 ******
2025-09-19 07:26:53.409172 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:26:53.409178 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:26:53.409184 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:26:53.409190 | orchestrator |
2025-09-19 07:26:53.409196 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2025-09-19 07:26:53.409202 | orchestrator | Friday 19 September 2025 07:24:38 +0000 (0:00:00.308) 0:02:28.626 ******
2025-09-19 07:26:53.409216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-19 07:26:53.409243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name':
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:53.409251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:53.409263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 07:26:53.409270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 07:26:53.409280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 07:26:53.409286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.409294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.409344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.409363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.409374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.409384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.409396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:53.409403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:53.409424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:53.409432 | orchestrator | 2025-09-19 07:26:53.409438 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-09-19 07:26:53.409449 | orchestrator | Friday 19 September 2025 07:24:40 +0000 (0:00:02.547) 0:02:31.174 ****** 2025-09-19 07:26:53.409455 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:53.409461 | orchestrator | 2025-09-19 07:26:53.409467 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-09-19 07:26:53.409473 | orchestrator | Friday 19 September 2025 07:24:41 
+0000 (0:00:00.132) 0:02:31.306 ****** 2025-09-19 07:26:53.409479 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:53.409485 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:53.409491 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:53.409497 | orchestrator | 2025-09-19 07:26:53.409503 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-09-19 07:26:53.409509 | orchestrator | Friday 19 September 2025 07:24:41 +0000 (0:00:00.407) 0:02:31.713 ****** 2025-09-19 07:26:53.409516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 07:26:53.409522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:26:53.409532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:26:53.409538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:26:53.409545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:26:53.409555 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:53.409576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 07:26:53.409583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:26:53.409589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:26:53.409603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:26:53.409609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:26:53.409615 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:53.409636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 07:26:53.409648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:26:53.409654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:26:53.409660 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:26:53.409666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:26:53.409673 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:53.409679 | orchestrator | 2025-09-19 07:26:53.409688 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-19 07:26:53.409694 | orchestrator | Friday 19 September 2025 07:24:42 +0000 (0:00:00.586) 0:02:32.300 ****** 2025-09-19 07:26:53.409700 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:26:53.409706 | orchestrator | 2025-09-19 07:26:53.409712 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-09-19 07:26:53.409718 | orchestrator | Friday 19 September 2025 07:24:42 +0000 
(0:00:00.489) 0:02:32.790 ****** 2025-09-19 07:26:53.409724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:53.409748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 
2025-09-19 07:26:53.409755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:53.409762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 07:26:53.409771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 07:26:53.409777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 07:26:53.409789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.409799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 
2025-09-19 07:26:53.409805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.409811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.409817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': 
'30'}}}) 2025-09-19 07:26:53.409827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.409837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:53.409850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:53.409856 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:53.409862 | orchestrator | 2025-09-19 07:26:53.409868 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-09-19 07:26:53.409874 | orchestrator | Friday 19 September 2025 07:24:47 +0000 (0:00:05.094) 0:02:37.884 ****** 2025-09-19 07:26:53.409880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 07:26:53.409887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:26:53.409896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:26:53.409906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:26:53.409916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:26:53.409922 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:53.409928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 07:26:53.409935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:26:53.409941 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:26:53.409950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:26:53.409960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:26:53.409966 | orchestrator | skipping: 
[testbed-node-1] 2025-09-19 07:26:53.409977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 07:26:53.409984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:26:53.409990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:26:53.409996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:26:53.410009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:26:53.410053 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:53.410062 | orchestrator | 2025-09-19 07:26:53.410068 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-09-19 07:26:53.410081 | orchestrator | Friday 19 September 2025 07:24:48 +0000 (0:00:00.610) 0:02:38.494 ****** 2025-09-19 07:26:53.410087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 07:26:53.410098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:26:53.410105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:26:53.410111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:26:53.410117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:26:53.410128 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:53.410139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 07:26:53.410145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:26:53.410157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:26:53.410164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:26:53.410170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:26:53.410176 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:53.410182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 07:26:53.410195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:26:53.410202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:26:53.410214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2025-09-19 07:26:53.410221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:26:53.410227 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:53.410233 | orchestrator | 2025-09-19 07:26:53.410239 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-09-19 07:26:53.410245 | orchestrator | Friday 19 September 2025 07:24:49 +0000 (0:00:00.776) 0:02:39.271 ****** 2025-09-19 07:26:53.410251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 
2025-09-19 07:26:53.410265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:53.410271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:53.410281 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 07:26:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:26:53.410296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 07:26:53.410302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 07:26:53.410313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.410370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.410376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.410383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.410392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.410399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.410409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:53.410415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:53.410424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:53.410429 | orchestrator | 2025-09-19 07:26:53.410435 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-09-19 07:26:53.410441 | orchestrator | Friday 19 September 2025 07:24:54 +0000 (0:00:05.237) 0:02:44.509 ****** 2025-09-19 07:26:53.410447 | 
orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-19 07:26:53.410453 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-19 07:26:53.410459 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-19 07:26:53.410464 | orchestrator | 2025-09-19 07:26:53.410470 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-09-19 07:26:53.410475 | orchestrator | Friday 19 September 2025 07:24:55 +0000 (0:00:01.524) 0:02:46.033 ****** 2025-09-19 07:26:53.410486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:53.410492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:53.410503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:53.410512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 07:26:53.410518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 07:26:53.410523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 07:26:53.410533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': 
'30'}}}) 2025-09-19 07:26:53.410539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.410549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.410555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.410564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.410570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.410578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 
07:26:53.410584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:53.410597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:53.410603 | orchestrator | 2025-09-19 07:26:53.410609 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-09-19 07:26:53.410614 | orchestrator | Friday 19 September 2025 07:25:11 +0000 (0:00:15.933) 0:03:01.967 ****** 2025-09-19 07:26:53.410620 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:26:53.410625 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:26:53.410631 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:26:53.410636 | orchestrator | 2025-09-19 07:26:53.410642 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-09-19 07:26:53.410648 | orchestrator | Friday 19 September 2025 07:25:13 +0000 
(0:00:01.591) 0:03:03.558 ****** 2025-09-19 07:26:53.410653 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-19 07:26:53.410659 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-19 07:26:53.410665 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-19 07:26:53.410670 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-19 07:26:53.410676 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-19 07:26:53.410681 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-19 07:26:53.410687 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-19 07:26:53.410692 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-19 07:26:53.410698 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-19 07:26:53.410704 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-19 07:26:53.410709 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-19 07:26:53.410715 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-19 07:26:53.410720 | orchestrator | 2025-09-19 07:26:53.410726 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-09-19 07:26:53.410734 | orchestrator | Friday 19 September 2025 07:25:18 +0000 (0:00:05.204) 0:03:08.762 ****** 2025-09-19 07:26:53.410740 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-19 07:26:53.410746 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-19 07:26:53.410751 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-19 07:26:53.410757 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-19 07:26:53.410762 | orchestrator | changed: [testbed-node-1] 
=> (item=client_ca.cert.pem) 2025-09-19 07:26:53.410768 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-19 07:26:53.410773 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-19 07:26:53.410779 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-19 07:26:53.410784 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-19 07:26:53.410790 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-19 07:26:53.410801 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-19 07:26:53.410807 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-19 07:26:53.410812 | orchestrator | 2025-09-19 07:26:53.410818 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-09-19 07:26:53.410823 | orchestrator | Friday 19 September 2025 07:25:23 +0000 (0:00:05.010) 0:03:13.772 ****** 2025-09-19 07:26:53.410829 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-19 07:26:53.410834 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-19 07:26:53.410840 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-19 07:26:53.410845 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-19 07:26:53.410851 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-19 07:26:53.410857 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-19 07:26:53.410865 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-19 07:26:53.410871 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-19 07:26:53.410877 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-19 07:26:53.410882 | orchestrator | changed: [testbed-node-0] => 
(item=server_ca.key.pem) 2025-09-19 07:26:53.410888 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-19 07:26:53.410893 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-19 07:26:53.410899 | orchestrator | 2025-09-19 07:26:53.410905 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-09-19 07:26:53.410910 | orchestrator | Friday 19 September 2025 07:25:28 +0000 (0:00:04.867) 0:03:18.639 ****** 2025-09-19 07:26:53.410916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:53.410922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:53.410931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:53.410941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2025-09-19 07:26:53.410950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 07:26:53.410956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 07:26:53.410962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.410968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.410976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.410986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.410992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.411001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 07:26:53.411007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:53.411013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:53.411019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:53.411028 | orchestrator | 2025-09-19 07:26:53.411034 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-19 07:26:53.411039 | orchestrator | Friday 19 September 2025 07:25:32 +0000 (0:00:03.622) 0:03:22.262 ****** 2025-09-19 07:26:53.411045 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:53.411051 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:53.411056 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:53.411062 | orchestrator | 2025-09-19 07:26:53.411071 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-09-19 07:26:53.411076 | orchestrator | Friday 19 September 2025 07:25:32 +0000 (0:00:00.307) 0:03:22.569 ****** 2025-09-19 07:26:53.411082 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:26:53.411087 | orchestrator | 2025-09-19 07:26:53.411093 | orchestrator | TASK 
[octavia : Creating Octavia persistence database] ************************* 2025-09-19 07:26:53.411099 | orchestrator | Friday 19 September 2025 07:25:34 +0000 (0:00:01.988) 0:03:24.558 ****** 2025-09-19 07:26:53.411104 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:26:53.411110 | orchestrator | 2025-09-19 07:26:53.411115 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-09-19 07:26:53.411121 | orchestrator | Friday 19 September 2025 07:25:36 +0000 (0:00:02.436) 0:03:26.995 ****** 2025-09-19 07:26:53.411126 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:26:53.411132 | orchestrator | 2025-09-19 07:26:53.411137 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-09-19 07:26:53.411144 | orchestrator | Friday 19 September 2025 07:25:38 +0000 (0:00:02.124) 0:03:29.120 ****** 2025-09-19 07:26:53.411149 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:26:53.411155 | orchestrator | 2025-09-19 07:26:53.411160 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-09-19 07:26:53.411166 | orchestrator | Friday 19 September 2025 07:25:41 +0000 (0:00:02.217) 0:03:31.337 ****** 2025-09-19 07:26:53.411172 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:26:53.411177 | orchestrator | 2025-09-19 07:26:53.411183 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-19 07:26:53.411188 | orchestrator | Friday 19 September 2025 07:26:01 +0000 (0:00:20.735) 0:03:52.073 ****** 2025-09-19 07:26:53.411194 | orchestrator | 2025-09-19 07:26:53.411199 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-19 07:26:53.411205 | orchestrator | Friday 19 September 2025 07:26:01 +0000 (0:00:00.061) 0:03:52.134 ****** 2025-09-19 07:26:53.411211 | orchestrator | 2025-09-19 07:26:53.411216 | orchestrator | 
TASK [octavia : Flush handlers] ************************************************ 2025-09-19 07:26:53.411225 | orchestrator | Friday 19 September 2025 07:26:01 +0000 (0:00:00.061) 0:03:52.195 ****** 2025-09-19 07:26:53.411231 | orchestrator | 2025-09-19 07:26:53.411237 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-09-19 07:26:53.411242 | orchestrator | Friday 19 September 2025 07:26:02 +0000 (0:00:00.067) 0:03:52.263 ****** 2025-09-19 07:26:53.411248 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:26:53.411253 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:26:53.411259 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:26:53.411265 | orchestrator | 2025-09-19 07:26:53.411270 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-09-19 07:26:53.411276 | orchestrator | Friday 19 September 2025 07:26:18 +0000 (0:00:15.975) 0:04:08.238 ****** 2025-09-19 07:26:53.411281 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:26:53.411287 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:26:53.411292 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:26:53.411298 | orchestrator | 2025-09-19 07:26:53.411303 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-09-19 07:26:53.411309 | orchestrator | Friday 19 September 2025 07:26:24 +0000 (0:00:06.397) 0:04:14.635 ****** 2025-09-19 07:26:53.411327 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:26:53.411333 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:26:53.411338 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:26:53.411348 | orchestrator | 2025-09-19 07:26:53.411354 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-09-19 07:26:53.411360 | orchestrator | Friday 19 September 2025 07:26:34 +0000 (0:00:10.306) 0:04:24.942 ****** 2025-09-19 07:26:53.411365 
| orchestrator | changed: [testbed-node-0] 2025-09-19 07:26:53.411371 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:26:53.411376 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:26:53.411382 | orchestrator | 2025-09-19 07:26:53.411388 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-09-19 07:26:53.411393 | orchestrator | Friday 19 September 2025 07:26:39 +0000 (0:00:05.129) 0:04:30.071 ****** 2025-09-19 07:26:53.411399 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:26:53.411404 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:26:53.411410 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:26:53.411416 | orchestrator | 2025-09-19 07:26:53.411421 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:26:53.411427 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-19 07:26:53.411434 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 07:26:53.411439 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 07:26:53.411445 | orchestrator | 2025-09-19 07:26:53.411451 | orchestrator | 2025-09-19 07:26:53.411456 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:26:53.411462 | orchestrator | Friday 19 September 2025 07:26:50 +0000 (0:00:10.641) 0:04:40.712 ****** 2025-09-19 07:26:53.411467 | orchestrator | =============================================================================== 2025-09-19 07:26:53.411473 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.74s 2025-09-19 07:26:53.411479 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.54s 2025-09-19 07:26:53.411484 | orchestrator | 
octavia : Restart octavia-api container -------------------------------- 15.98s 2025-09-19 07:26:53.411490 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.93s 2025-09-19 07:26:53.411495 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.81s 2025-09-19 07:26:53.411504 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.79s 2025-09-19 07:26:53.411510 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.64s 2025-09-19 07:26:53.411515 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.31s 2025-09-19 07:26:53.411521 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.99s 2025-09-19 07:26:53.411527 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.82s 2025-09-19 07:26:53.411532 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.57s 2025-09-19 07:26:53.411538 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.56s 2025-09-19 07:26:53.411543 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.40s 2025-09-19 07:26:53.411549 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 6.23s 2025-09-19 07:26:53.411554 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.47s 2025-09-19 07:26:53.411560 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.24s 2025-09-19 07:26:53.411565 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.20s 2025-09-19 07:26:53.411571 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.13s 2025-09-19 07:26:53.411576 | orchestrator | 
service-cert-copy : octavia | Copying over extra CA certificates -------- 5.09s 2025-09-19 07:26:53.411586 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.08s 2025-09-19 07:26:56.450957 | orchestrator | 2025-09-19 07:26:56 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:26:59.491581 | orchestrator | 2025-09-19 07:26:59 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:27:02.531857 | orchestrator | 2025-09-19 07:27:02 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:27:05.573066 | orchestrator | 2025-09-19 07:27:05 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:27:08.611664 | orchestrator | 2025-09-19 07:27:08 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:27:11.657895 | orchestrator | 2025-09-19 07:27:11 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:27:14.699205 | orchestrator | 2025-09-19 07:27:14 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:27:17.740471 | orchestrator | 2025-09-19 07:27:17 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:27:20.782907 | orchestrator | 2025-09-19 07:27:20 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:27:23.823323 | orchestrator | 2025-09-19 07:27:23 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:27:26.863729 | orchestrator | 2025-09-19 07:27:26 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:27:29.899918 | orchestrator | 2025-09-19 07:27:29 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:27:32.931322 | orchestrator | 2025-09-19 07:27:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:27:35.972237 | orchestrator | 2025-09-19 07:27:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:27:39.006545 | orchestrator | 2025-09-19 07:27:39 | INFO  | Wait 1 
second(s) until refresh of running tasks 2025-09-19 07:27:42.052284 | orchestrator | 2025-09-19 07:27:42 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:27:45.090820 | orchestrator | 2025-09-19 07:27:45 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:27:48.134569 | orchestrator | 2025-09-19 07:27:48 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:27:51.175165 | orchestrator | 2025-09-19 07:27:51 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:27:54.214874 | orchestrator | 2025-09-19 07:27:54.511823 | orchestrator | 2025-09-19 07:27:54.517706 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Fri Sep 19 07:27:54 UTC 2025 2025-09-19 07:27:54.517741 | orchestrator | 2025-09-19 07:27:54.935204 | orchestrator | ok: Runtime: 0:32:46.662640 2025-09-19 07:27:55.190301 | 2025-09-19 07:27:55.190527 | TASK [Bootstrap services] 2025-09-19 07:27:56.043339 | orchestrator | 2025-09-19 07:27:56.043567 | orchestrator | # BOOTSTRAP 2025-09-19 07:27:56.043589 | orchestrator | 2025-09-19 07:27:56.043602 | orchestrator | + set -e 2025-09-19 07:27:56.043615 | orchestrator | + echo 2025-09-19 07:27:56.043629 | orchestrator | + echo '# BOOTSTRAP' 2025-09-19 07:27:56.043647 | orchestrator | + echo 2025-09-19 07:27:56.043694 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-09-19 07:27:56.054685 | orchestrator | + set -e 2025-09-19 07:27:56.054738 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-09-19 07:28:00.379973 | orchestrator | 2025-09-19 07:28:00 | INFO  | It takes a moment until task 5d65d0f9-b9a9-478d-910c-9db5aeedbdaa (flavor-manager) has been started and output is visible here. 
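Each play in the log above ends with a PLAY RECAP line per host (e.g. `testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0`). As an illustration only (this parser is not part of the job; the regex is an assumption based on the recap format shown), such lines reduce to per-host counters:

```python
import re

# Matches "<host> : ok=57 changed=39 ..." as printed by Ansible's PLAY RECAP.
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

def parse_recap(line):
    """Split an Ansible PLAY RECAP line into (host, {counter: value})."""
    m = RECAP_RE.match(line.strip())
    if m is None:
        raise ValueError(f"not a recap line: {line!r}")
    counters = dict(
        (key, int(val))
        for key, val in (pair.split("=") for pair in m.group("counters").split())
    )
    return m.group("host"), counters

# Recap line taken verbatim from the octavia play above.
host, counts = parse_recap(
    "testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0"
)
```

A `failed` or `unreachable` count above zero in any recap would have failed this deploy stage.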
2025-09-19 07:28:06.614901 | orchestrator | 2025-09-19 07:28:04 | INFO  | Flavor SCS-1V-4 created
2025-09-19 07:28:06.615010 | orchestrator | 2025-09-19 07:28:04 | INFO  | Flavor SCS-2V-8 created
2025-09-19 07:28:06.615027 | orchestrator | 2025-09-19 07:28:04 | INFO  | Flavor SCS-4V-16 created
2025-09-19 07:28:06.615039 | orchestrator | 2025-09-19 07:28:04 | INFO  | Flavor SCS-8V-32 created
2025-09-19 07:28:06.615050 | orchestrator | 2025-09-19 07:28:05 | INFO  | Flavor SCS-1V-2 created
2025-09-19 07:28:06.615061 | orchestrator | 2025-09-19 07:28:05 | INFO  | Flavor SCS-2V-4 created
2025-09-19 07:28:06.615072 | orchestrator | 2025-09-19 07:28:05 | INFO  | Flavor SCS-4V-8 created
2025-09-19 07:28:06.615084 | orchestrator | 2025-09-19 07:28:05 | INFO  | Flavor SCS-8V-16 created
2025-09-19 07:28:06.615109 | orchestrator | 2025-09-19 07:28:05 | INFO  | Flavor SCS-16V-32 created
2025-09-19 07:28:06.615121 | orchestrator | 2025-09-19 07:28:05 | INFO  | Flavor SCS-1V-8 created
2025-09-19 07:28:06.615132 | orchestrator | 2025-09-19 07:28:05 | INFO  | Flavor SCS-2V-16 created
2025-09-19 07:28:06.615144 | orchestrator | 2025-09-19 07:28:05 | INFO  | Flavor SCS-4V-32 created
2025-09-19 07:28:06.615155 | orchestrator | 2025-09-19 07:28:06 | INFO  | Flavor SCS-1L-1 created
2025-09-19 07:28:06.615166 | orchestrator | 2025-09-19 07:28:06 | INFO  | Flavor SCS-2V-4-20s created
2025-09-19 07:28:06.615177 | orchestrator | 2025-09-19 07:28:06 | INFO  | Flavor SCS-4V-16-100s created
2025-09-19 07:28:08.758371 | orchestrator | 2025-09-19 07:28:08 | INFO  | Trying to run play bootstrap-basic in environment openstack
2025-09-19 07:28:18.994770 | orchestrator | 2025-09-19 07:28:18 | INFO  | Task bb40d599-87a8-4162-84d1-6d0e212c9067 (bootstrap-basic) was prepared for execution.
2025-09-19 07:28:18.994879 | orchestrator | 2025-09-19 07:28:18 | INFO  | It takes a moment until task bb40d599-87a8-4162-84d1-6d0e212c9067 (bootstrap-basic) has been started and output is visible here.
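The flavor names created above follow the SCS flavor-naming scheme: `SCS-<vCPUs><cpu type>-<RAM in GB>[-<root disk in GB>s]`, e.g. `SCS-2V-4-20s` is 2 vCPUs, 4 GB RAM, 20 GB root disk. A minimal parser sketch covering only the simple names seen in this log (the full SCS spec allows further modifiers; the field interpretation of the `V`/`L` suffix is taken from that spec, not from this output):

```python
import re
from typing import NamedTuple, Optional

class ScsFlavor(NamedTuple):
    vcpus: int
    cpu_suffix: str          # "V" or "L" CPU-type letter from the SCS naming spec
    ram_gb: int
    disk_gb: Optional[int]   # None when the name carries no disk part

# Only covers names like SCS-16V-32 and SCS-2V-4-20s as seen above.
NAME_RE = re.compile(r"^SCS-(\d+)([VL])-(\d+)(?:-(\d+)s)?$")

def parse_scs_flavor(name):
    m = NAME_RE.match(name)
    if m is None:
        raise ValueError(f"unrecognized flavor name: {name}")
    vcpus, suffix, ram, disk = m.groups()
    return ScsFlavor(int(vcpus), suffix, int(ram), int(disk) if disk else None)
```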
2025-09-19 07:29:17.644162 | orchestrator |
2025-09-19 07:29:17.644263 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2025-09-19 07:29:17.644276 | orchestrator |
2025-09-19 07:29:17.644285 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-19 07:29:17.644296 | orchestrator | Friday 19 September 2025 07:28:22 +0000 (0:00:00.066) 0:00:00.066 ******
2025-09-19 07:29:17.644306 | orchestrator | ok: [localhost]
2025-09-19 07:29:17.644314 | orchestrator |
2025-09-19 07:29:17.644322 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2025-09-19 07:29:17.644329 | orchestrator | Friday 19 September 2025 07:28:24 +0000 (0:00:01.582) 0:00:01.649 ******
2025-09-19 07:29:17.644336 | orchestrator | ok: [localhost]
2025-09-19 07:29:17.644343 | orchestrator |
2025-09-19 07:29:17.644351 | orchestrator | TASK [Create volume type LUKS] *************************************************
2025-09-19 07:29:17.644358 | orchestrator | Friday 19 September 2025 07:28:32 +0000 (0:00:08.188) 0:00:09.838 ******
2025-09-19 07:29:17.644365 | orchestrator | changed: [localhost]
2025-09-19 07:29:17.644372 | orchestrator |
2025-09-19 07:29:17.644380 | orchestrator | TASK [Get volume type local] ***************************************************
2025-09-19 07:29:17.644412 | orchestrator | Friday 19 September 2025 07:28:40 +0000 (0:00:07.286) 0:00:17.125 ******
2025-09-19 07:29:17.644419 | orchestrator | ok: [localhost]
2025-09-19 07:29:17.644426 | orchestrator |
2025-09-19 07:29:17.644433 | orchestrator | TASK [Create volume type local] ************************************************
2025-09-19 07:29:17.644440 | orchestrator | Friday 19 September 2025 07:28:47 +0000 (0:00:07.365) 0:00:24.490 ******
2025-09-19 07:29:17.644447 | orchestrator | changed: [localhost]
2025-09-19 07:29:17.644454 | orchestrator |
2025-09-19 07:29:17.644498 | orchestrator | TASK [Create public network] ***************************************************
2025-09-19 07:29:17.644505 | orchestrator | Friday 19 September 2025 07:28:54 +0000 (0:00:06.678) 0:00:31.168 ******
2025-09-19 07:29:17.644511 | orchestrator | changed: [localhost]
2025-09-19 07:29:17.644518 | orchestrator |
2025-09-19 07:29:17.644524 | orchestrator | TASK [Set public network to default] *******************************************
2025-09-19 07:29:17.644531 | orchestrator | Friday 19 September 2025 07:28:59 +0000 (0:00:05.008) 0:00:36.177 ******
2025-09-19 07:29:17.644537 | orchestrator | changed: [localhost]
2025-09-19 07:29:17.644544 | orchestrator |
2025-09-19 07:29:17.644551 | orchestrator | TASK [Create public subnet] ****************************************************
2025-09-19 07:29:17.644558 | orchestrator | Friday 19 September 2025 07:29:05 +0000 (0:00:06.285) 0:00:42.463 ******
2025-09-19 07:29:17.644565 | orchestrator | changed: [localhost]
2025-09-19 07:29:17.644572 | orchestrator |
2025-09-19 07:29:17.644579 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2025-09-19 07:29:17.644586 | orchestrator | Friday 19 September 2025 07:29:09 +0000 (0:00:04.426) 0:00:46.889 ******
2025-09-19 07:29:17.644592 | orchestrator | changed: [localhost]
2025-09-19 07:29:17.644599 | orchestrator |
2025-09-19 07:29:17.644606 | orchestrator | TASK [Create manager role] *****************************************************
2025-09-19 07:29:17.644613 | orchestrator | Friday 19 September 2025 07:29:14 +0000 (0:00:04.302) 0:00:51.192 ******
2025-09-19 07:29:17.644619 | orchestrator | ok: [localhost]
2025-09-19 07:29:17.644625 | orchestrator |
2025-09-19 07:29:17.644631 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:29:17.644638 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:29:17.644646 | orchestrator |
2025-09-19 07:29:17.644652 | orchestrator |
2025-09-19 07:29:17.644658 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:29:17.644664 | orchestrator | Friday 19 September 2025 07:29:17 +0000 (0:00:03.394) 0:00:54.587 ******
2025-09-19 07:29:17.644671 | orchestrator | ===============================================================================
2025-09-19 07:29:17.644677 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.19s
2025-09-19 07:29:17.644684 | orchestrator | Get volume type local --------------------------------------------------- 7.37s
2025-09-19 07:29:17.644691 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.29s
2025-09-19 07:29:17.644698 | orchestrator | Create volume type local ------------------------------------------------ 6.68s
2025-09-19 07:29:17.644704 | orchestrator | Set public network to default ------------------------------------------- 6.29s
2025-09-19 07:29:17.644711 | orchestrator | Create public network --------------------------------------------------- 5.01s
2025-09-19 07:29:17.644728 | orchestrator | Create public subnet ---------------------------------------------------- 4.43s
2025-09-19 07:29:17.644735 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.30s
2025-09-19 07:29:17.644742 | orchestrator | Create manager role ----------------------------------------------------- 3.40s
2025-09-19 07:29:17.644749 | orchestrator | Gathering Facts --------------------------------------------------------- 1.58s
2025-09-19 07:29:19.530321 | orchestrator | 2025-09-19 07:29:19 | INFO  | It takes a moment until task 7d2e317f-7548-496e-b8d3-7355777a2d49 (image-manager) has been started and output is visible here.
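The TASKS RECAP above lists per-task durations as `<task name> ---- <seconds>s`. A small sketch that turns such lines into `(name, seconds)` pairs and totals them (illustrative only; the regex is an assumption derived from the layout shown):

```python
import re

# "Get volume type LUKS ---------------------------------------------------- 8.19s"
LINE_RE = re.compile(r"^(?P<task>.+?) -+ (?P<secs>\d+\.\d+)s$")

def parse_timing(line):
    """Return (task name, seconds) for a TASKS RECAP line, else None."""
    m = LINE_RE.match(line.strip())
    return (m.group("task"), float(m.group("secs"))) if m else None

# Two recap lines taken verbatim from the bootstrap-basic play above.
recap = [
    "Get volume type LUKS ---------------------------------------------------- 8.19s",
    "Create manager role ----------------------------------------------------- 3.40s",
]
timings = [t for t in map(parse_timing, recap) if t]
total = sum(secs for _, secs in timings)
```

Summing all ten recap lines would roughly account for the play's 0:00:54 wall time.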
2025-09-19 07:30:19.863436 | orchestrator | 2025-09-19 07:29:22 | INFO  | Processing image 'Cirros 0.6.2'
2025-09-19 07:30:19.863599 | orchestrator | 2025-09-19 07:29:23 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2025-09-19 07:30:19.863618 | orchestrator | 2025-09-19 07:29:23 | INFO  | Importing image Cirros 0.6.2
2025-09-19 07:30:19.863630 | orchestrator | 2025-09-19 07:29:23 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-09-19 07:30:19.863643 | orchestrator | 2025-09-19 07:29:24 | INFO  | Waiting for image to leave queued state...
2025-09-19 07:30:19.863655 | orchestrator | 2025-09-19 07:29:26 | INFO  | Waiting for import to complete...
2025-09-19 07:30:19.863666 | orchestrator | 2025-09-19 07:29:37 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2025-09-19 07:30:19.863678 | orchestrator | 2025-09-19 07:29:37 | INFO  | Checking parameters of 'Cirros 0.6.2'
2025-09-19 07:30:19.863690 | orchestrator | 2025-09-19 07:29:37 | INFO  | Setting internal_version = 0.6.2
2025-09-19 07:30:19.863702 | orchestrator | 2025-09-19 07:29:37 | INFO  | Setting image_original_user = cirros
2025-09-19 07:30:19.863713 | orchestrator | 2025-09-19 07:29:37 | INFO  | Adding tag os:cirros
2025-09-19 07:30:19.863724 | orchestrator | 2025-09-19 07:29:37 | INFO  | Setting property architecture: x86_64
2025-09-19 07:30:19.863735 | orchestrator | 2025-09-19 07:29:37 | INFO  | Setting property hw_disk_bus: scsi
2025-09-19 07:30:19.863746 | orchestrator | 2025-09-19 07:29:38 | INFO  | Setting property hw_rng_model: virtio
2025-09-19 07:30:19.863757 | orchestrator | 2025-09-19 07:29:38 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-09-19 07:30:19.863768 | orchestrator | 2025-09-19 07:29:38 | INFO  | Setting property hw_watchdog_action: reset
2025-09-19 07:30:19.863780 | orchestrator | 2025-09-19 07:29:38 | INFO  | Setting property hypervisor_type: qemu
2025-09-19 07:30:19.863791 | orchestrator | 2025-09-19 07:29:39 | INFO  | Setting property os_distro: cirros
2025-09-19 07:30:19.863802 | orchestrator | 2025-09-19 07:29:39 | INFO  | Setting property replace_frequency: never
2025-09-19 07:30:19.863813 | orchestrator | 2025-09-19 07:29:39 | INFO  | Setting property uuid_validity: none
2025-09-19 07:30:19.863824 | orchestrator | 2025-09-19 07:29:39 | INFO  | Setting property provided_until: none
2025-09-19 07:30:19.863835 | orchestrator | 2025-09-19 07:29:39 | INFO  | Setting property image_description: Cirros
2025-09-19 07:30:19.863846 | orchestrator | 2025-09-19 07:29:40 | INFO  | Setting property image_name: Cirros
2025-09-19 07:30:19.863857 | orchestrator | 2025-09-19 07:29:40 | INFO  | Setting property internal_version: 0.6.2
2025-09-19 07:30:19.863869 | orchestrator | 2025-09-19 07:29:40 | INFO  | Setting property image_original_user: cirros
2025-09-19 07:30:19.863880 | orchestrator | 2025-09-19 07:29:40 | INFO  | Setting property os_version: 0.6.2
2025-09-19 07:30:19.863899 | orchestrator | 2025-09-19 07:29:40 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-09-19 07:30:19.863912 | orchestrator | 2025-09-19 07:29:41 | INFO  | Setting property image_build_date: 2023-05-30
2025-09-19 07:30:19.863923 | orchestrator | 2025-09-19 07:29:41 | INFO  | Checking status of 'Cirros 0.6.2'
2025-09-19 07:30:19.863934 | orchestrator | 2025-09-19 07:29:41 | INFO  | Checking visibility of 'Cirros 0.6.2'
2025-09-19 07:30:19.863945 | orchestrator | 2025-09-19 07:29:41 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2025-09-19 07:30:19.863956 | orchestrator | 2025-09-19 07:29:41 | INFO  | Processing image 'Cirros 0.6.3'
2025-09-19 07:30:19.863977 | orchestrator | 2025-09-19 07:29:41 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2025-09-19 07:30:19.863990 | orchestrator | 2025-09-19 07:29:41 | INFO  | Importing image Cirros 0.6.3
2025-09-19 07:30:19.864003 | orchestrator | 2025-09-19 07:29:41 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-09-19 07:30:19.864015 | orchestrator | 2025-09-19 07:29:42 | INFO  | Waiting for image to leave queued state...
2025-09-19 07:30:19.864028 | orchestrator | 2025-09-19 07:29:44 | INFO  | Waiting for import to complete...
2025-09-19 07:30:19.864045 | orchestrator | 2025-09-19 07:29:54 | INFO  | Waiting for import to complete...
2025-09-19 07:30:19.864076 | orchestrator | 2025-09-19 07:30:04 | INFO  | Waiting for import to complete...
2025-09-19 07:30:19.864090 | orchestrator | 2025-09-19 07:30:14 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2025-09-19 07:30:19.864102 | orchestrator | 2025-09-19 07:30:15 | INFO  | Checking parameters of 'Cirros 0.6.3'
2025-09-19 07:30:19.864115 | orchestrator | 2025-09-19 07:30:15 | INFO  | Setting internal_version = 0.6.3
2025-09-19 07:30:19.864127 | orchestrator | 2025-09-19 07:30:15 | INFO  | Setting image_original_user = cirros
2025-09-19 07:30:19.864139 | orchestrator | 2025-09-19 07:30:15 | INFO  | Adding tag os:cirros
2025-09-19 07:30:19.864151 | orchestrator | 2025-09-19 07:30:15 | INFO  | Setting property architecture: x86_64
2025-09-19 07:30:19.864163 | orchestrator | 2025-09-19 07:30:15 | INFO  | Setting property hw_disk_bus: scsi
2025-09-19 07:30:19.864176 | orchestrator | 2025-09-19 07:30:15 | INFO  | Setting property hw_rng_model: virtio
2025-09-19 07:30:19.864188 | orchestrator | 2025-09-19 07:30:16 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-09-19 07:30:19.864201 | orchestrator | 2025-09-19 07:30:16 | INFO  | Setting property hw_watchdog_action: reset
2025-09-19 07:30:19.864214 | orchestrator |
2025-09-19 07:30:16 | INFO  | Setting property hypervisor_type: qemu 2025-09-19 07:30:19.864226 | orchestrator | 2025-09-19 07:30:16 | INFO  | Setting property os_distro: cirros 2025-09-19 07:30:19.864238 | orchestrator | 2025-09-19 07:30:17 | INFO  | Setting property replace_frequency: never 2025-09-19 07:30:19.864251 | orchestrator | 2025-09-19 07:30:17 | INFO  | Setting property uuid_validity: none 2025-09-19 07:30:19.864263 | orchestrator | 2025-09-19 07:30:17 | INFO  | Setting property provided_until: none 2025-09-19 07:30:19.864276 | orchestrator | 2025-09-19 07:30:17 | INFO  | Setting property image_description: Cirros 2025-09-19 07:30:19.864288 | orchestrator | 2025-09-19 07:30:18 | INFO  | Setting property image_name: Cirros 2025-09-19 07:30:19.864300 | orchestrator | 2025-09-19 07:30:18 | INFO  | Setting property internal_version: 0.6.3 2025-09-19 07:30:19.864312 | orchestrator | 2025-09-19 07:30:18 | INFO  | Setting property image_original_user: cirros 2025-09-19 07:30:19.864324 | orchestrator | 2025-09-19 07:30:18 | INFO  | Setting property os_version: 0.6.3 2025-09-19 07:30:19.864335 | orchestrator | 2025-09-19 07:30:18 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-09-19 07:30:19.864346 | orchestrator | 2025-09-19 07:30:19 | INFO  | Setting property image_build_date: 2024-09-26 2025-09-19 07:30:19.864357 | orchestrator | 2025-09-19 07:30:19 | INFO  | Checking status of 'Cirros 0.6.3' 2025-09-19 07:30:19.864367 | orchestrator | 2025-09-19 07:30:19 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-09-19 07:30:19.864390 | orchestrator | 2025-09-19 07:30:19 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-09-19 07:30:20.049967 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-09-19 07:30:21.841452 | orchestrator | 2025-09-19 07:30:21 | INFO  | date: 2025-09-19 2025-09-19 07:30:21.841575 | 
orchestrator | 2025-09-19 07:30:21 | INFO  | image: octavia-amphora-haproxy-2024.2.20250919.qcow2 2025-09-19 07:30:21.841592 | orchestrator | 2025-09-19 07:30:21 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250919.qcow2 2025-09-19 07:30:21.841901 | orchestrator | 2025-09-19 07:30:21 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250919.qcow2.CHECKSUM 2025-09-19 07:30:21.881690 | orchestrator | 2025-09-19 07:30:21 | INFO  | checksum: cb1f8a9bf0aeb0e92074b04499e688b0043001241167a8bf8df49931cc66885f 2025-09-19 07:30:21.952942 | orchestrator | 2025-09-19 07:30:21 | INFO  | It takes a moment until task 2374c289-e4be-4f2c-ae10-4df150b3d2ca (image-manager) has been started and output is visible here. 2025-09-19 07:31:22.656937 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
2025-09-19 07:31:22.657051 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound
2025-09-19 07:31:22.657069 | orchestrator | 2025-09-19 07:30:24 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-09-19'
2025-09-19 07:31:22.657083 | orchestrator | 2025-09-19 07:30:24 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250919.qcow2: 200
2025-09-19 07:31:22.657096 | orchestrator | 2025-09-19 07:30:24 | INFO  | Importing image OpenStack Octavia Amphora 2025-09-19
2025-09-19 07:31:22.657108 | orchestrator | 2025-09-19 07:30:24 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250919.qcow2
2025-09-19 07:31:22.657121 | orchestrator | 2025-09-19 07:30:25 | INFO  | Waiting for image to leave queued state...
2025-09-19 07:31:22.657132 | orchestrator | 2025-09-19 07:30:27 | INFO  | Waiting for import to complete...
2025-09-19 07:31:22.657144 | orchestrator | 2025-09-19 07:30:37 | INFO  | Waiting for import to complete...
2025-09-19 07:31:22.657155 | orchestrator | 2025-09-19 07:30:47 | INFO  | Waiting for import to complete...
2025-09-19 07:31:22.657165 | orchestrator | 2025-09-19 07:30:57 | INFO  | Waiting for import to complete...
2025-09-19 07:31:22.657176 | orchestrator | 2025-09-19 07:31:07 | INFO  | Waiting for import to complete...
2025-09-19 07:31:22.657187 | orchestrator | 2025-09-19 07:31:17 | INFO  | Import of 'OpenStack Octavia Amphora 2025-09-19' successfully completed, reloading images
2025-09-19 07:31:22.657200 | orchestrator | 2025-09-19 07:31:18 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-09-19'
2025-09-19 07:31:22.657212 | orchestrator | 2025-09-19 07:31:18 | INFO  | Setting internal_version = 2025-09-19
2025-09-19 07:31:22.657223 | orchestrator | 2025-09-19 07:31:18 | INFO  | Setting image_original_user = ubuntu
2025-09-19 07:31:22.657256 | orchestrator | 2025-09-19 07:31:18 | INFO  | Adding tag amphora
2025-09-19 07:31:22.657269 | orchestrator | 2025-09-19 07:31:18 | INFO  | Adding tag os:ubuntu
2025-09-19 07:31:22.657280 | orchestrator | 2025-09-19 07:31:18 | INFO  | Setting property architecture: x86_64
2025-09-19 07:31:22.657291 | orchestrator | 2025-09-19 07:31:19 | INFO  | Setting property hw_disk_bus: scsi
2025-09-19 07:31:22.657301 | orchestrator | 2025-09-19 07:31:19 | INFO  | Setting property hw_rng_model: virtio
2025-09-19 07:31:22.657312 | orchestrator | 2025-09-19 07:31:19 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-09-19 07:31:22.657323 | orchestrator | 2025-09-19 07:31:19 | INFO  | Setting property hw_watchdog_action: reset
2025-09-19 07:31:22.657334 | orchestrator | 2025-09-19 07:31:19 | INFO  | Setting property hypervisor_type: qemu
2025-09-19 07:31:22.657345 | orchestrator | 2025-09-19 07:31:20 | INFO  | Setting property os_distro: ubuntu
2025-09-19 07:31:22.657356 | orchestrator | 2025-09-19 07:31:20 | INFO  | Setting property replace_frequency: quarterly
2025-09-19 07:31:22.657366 | orchestrator | 2025-09-19 07:31:20 | INFO  | Setting property uuid_validity: last-1
2025-09-19 07:31:22.657377 | orchestrator | 2025-09-19 07:31:20 | INFO  | Setting property provided_until: none
2025-09-19 07:31:22.657388 | orchestrator | 2025-09-19 07:31:20 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2025-09-19 07:31:22.657399 | orchestrator | 2025-09-19 07:31:21 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2025-09-19 07:31:22.657410 | orchestrator | 2025-09-19 07:31:21 | INFO  | Setting property internal_version: 2025-09-19
2025-09-19 07:31:22.657420 | orchestrator | 2025-09-19 07:31:21 | INFO  | Setting property image_original_user: ubuntu
2025-09-19 07:31:22.657431 | orchestrator | 2025-09-19 07:31:21 | INFO  | Setting property os_version: 2025-09-19
2025-09-19 07:31:22.657443 | orchestrator | 2025-09-19 07:31:21 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250919.qcow2
2025-09-19 07:31:22.657469 | orchestrator | 2025-09-19 07:31:22 | INFO  | Setting property image_build_date: 2025-09-19
2025-09-19 07:31:22.657493 | orchestrator | 2025-09-19 07:31:22 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-09-19'
2025-09-19 07:31:22.657506 | orchestrator | 2025-09-19 07:31:22 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-09-19'
2025-09-19 07:31:22.657519 | orchestrator | 2025-09-19 07:31:22 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2025-09-19 07:31:22.657531 | orchestrator | 2025-09-19 07:31:22 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2025-09-19 07:31:22.657545 | orchestrator | 2025-09-19 07:31:22 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2025-09-19 07:31:22.657557 | orchestrator | 2025-09-19 07:31:22 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2025-09-19 07:31:23.377509 | orchestrator | ok: Runtime: 0:03:27.292136
2025-09-19 07:31:23.400753 |
2025-09-19 07:31:23.400901 | TASK [Run checks]
2025-09-19 07:31:24.079263 | orchestrator | + set -e
2025-09-19 07:31:24.079391 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-19 07:31:24.079403 | orchestrator | ++ export INTERACTIVE=false
2025-09-19 07:31:24.079412 | orchestrator | ++ INTERACTIVE=false
2025-09-19 07:31:24.079417 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-19 07:31:24.079422 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-19 07:31:24.079428 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-09-19 07:31:24.080053 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-09-19 07:31:24.086652 | orchestrator |
2025-09-19 07:31:24.086762 | orchestrator | # CHECK
2025-09-19 07:31:24.086780 | orchestrator |
2025-09-19 07:31:24.086795 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-09-19 07:31:24.086814 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-09-19 07:31:24.086827 | orchestrator | + echo
2025-09-19 07:31:24.086841 | orchestrator | + echo '# CHECK'
2025-09-19 07:31:24.086854 | orchestrator | + echo
2025-09-19 07:31:24.086872 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-09-19 07:31:24.087507 | orchestrator | ++ semver 9.2.0 5.0.0
2025-09-19 07:31:24.145420 | orchestrator |
2025-09-19 07:31:24.145521 | orchestrator | ## Containers @ testbed-manager
2025-09-19 07:31:24.145537 | orchestrator |
2025-09-19 07:31:24.145551 | orchestrator | + [[ 1 -eq -1 ]]
2025-09-19 07:31:24.145585 | orchestrator | + echo
2025-09-19 07:31:24.145597 | orchestrator | + echo '## Containers @ testbed-manager'
2025-09-19 07:31:24.145609 | orchestrator | + echo
2025-09-19 07:31:24.145620 | orchestrator | + osism container testbed-manager ps
2025-09-19 07:31:26.427018 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-09-19 07:31:26.427137 | orchestrator | 81a1f7963ae4 registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_blackbox_exporter
2025-09-19 07:31:26.427159 | orchestrator | 30d6689bbe38 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_alertmanager
2025-09-19 07:31:26.427178 | orchestrator | 27b12386a4fa registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-09-19 07:31:26.427189 | orchestrator | e0c068dd3d38 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2025-09-19 07:31:26.427200 | orchestrator | 03396175a88a registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_server
2025-09-19 07:31:26.427211 | orchestrator | 543aba194143 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 17 minutes ago Up 17 minutes cephclient
2025-09-19 07:31:26.427225 | orchestrator | 1e6f99c9515e registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 29 minutes ago Up 28 minutes cron
2025-09-19 07:31:26.427236 | orchestrator | 327308920be7 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox
2025-09-19 07:31:26.427246 | orchestrator | a38d119ea25c registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd
2025-09-19 07:31:26.427281 | orchestrator | 003f421e4704 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 30 minutes ago Up 29 minutes (healthy) 80/tcp phpmyadmin
2025-09-19 07:31:26.427292 | orchestrator | 947712b704b2 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 30 minutes ago Up 30 minutes openstackclient
2025-09-19 07:31:26.427302 | orchestrator | 6120177a683b registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 30 minutes ago Up 30 minutes (healthy) 8080/tcp homer
2025-09-19 07:31:26.427312 | orchestrator | 89df70692ab4 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 37 minutes ago Up 37 minutes 192.168.16.5:3000->3000/tcp osism-frontend
2025-09-19 07:31:26.427322 | orchestrator | 2acce61a973e registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 52 minutes ago Up 52 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2025-09-19 07:31:26.427352 | orchestrator | eb6ce51ba757 registry.osism.tech/osism/inventory-reconciler:0.20250711.0 "/sbin/tini -- /entr…" 56 minutes ago Up 36 minutes (healthy) manager-inventory_reconciler-1
2025-09-19 07:31:26.427363 | orchestrator | 2b315269784d registry.osism.tech/osism/ceph-ansible:0.20250711.0 "/entrypoint.sh osis…" 56 minutes ago Up 37 minutes (healthy) ceph-ansible
2025-09-19 07:31:26.427378 | orchestrator | 58a414c6295b registry.osism.tech/osism/osism-ansible:0.20250711.0 "/entrypoint.sh osis…" 56 minutes ago Up 37 minutes (healthy) osism-ansible
2025-09-19 07:31:26.427388 | orchestrator | be1e12c596c7 registry.osism.tech/osism/kolla-ansible:0.20250711.0 "/entrypoint.sh osis…" 56 minutes ago Up 37 minutes (healthy) kolla-ansible
2025-09-19 07:31:26.427398 | orchestrator | 6530d051e6de registry.osism.tech/osism/osism-kubernetes:0.20250711.0 "/entrypoint.sh osis…" 56 minutes ago Up 37 minutes (healthy) osism-kubernetes
2025-09-19 07:31:26.427408 | orchestrator | 762513219bbe registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 56 minutes ago Up 37 minutes (healthy) 8000/tcp manager-ara-server-1
2025-09-19 07:31:26.427418 | orchestrator | e6c37ad7fc6e registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" 56 minutes ago Up 37 minutes (healthy) 6379/tcp manager-redis-1
2025-09-19 07:31:26.427427 | orchestrator | 842b4292be96 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 56 minutes ago Up 37 minutes (healthy) manager-flower-1
2025-09-19 07:31:26.427438 | orchestrator | 6857ed6d6a2a registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 56 minutes ago Up 37 minutes (healthy) manager-beat-1
2025-09-19 07:31:26.427455 | orchestrator | d2373b8e386f registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 56 minutes ago Up 37 minutes (healthy) manager-openstack-1
2025-09-19 07:31:26.427465 | orchestrator | be9a030aa44b registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 56 minutes ago Up 37 minutes (healthy) manager-listener-1
2025-09-19 07:31:26.427475 | orchestrator | 7a1c0b5e74c1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- sleep…" 56 minutes ago Up 37 minutes (healthy) osismclient
2025-09-19 07:31:26.427485 | orchestrator | e6e111314113 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 56 minutes ago Up 37 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2025-09-19 07:31:26.427495 | orchestrator | 65b94d2c7790 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" 56 minutes ago Up 37 minutes (healthy) 3306/tcp manager-mariadb-1
2025-09-19 07:31:26.427505 | orchestrator | e92ca2db8bc8 registry.osism.tech/dockerhub/library/traefik:v3.4.3 "/entrypoint.sh trae…" 58 minutes ago Up 58 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2025-09-19 07:31:26.711147 | orchestrator |
2025-09-19 07:31:26.711245 | orchestrator | ## Images @ testbed-manager
2025-09-19 07:31:26.711261 | orchestrator |
2025-09-19 07:31:26.711274 | orchestrator | + echo
2025-09-19 07:31:26.711286 | orchestrator | + echo '## Images @ testbed-manager'
2025-09-19 07:31:26.711299 | orchestrator | + echo
2025-09-19 07:31:26.711310 | orchestrator | + osism container testbed-manager images
2025-09-19 07:31:28.891134 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-09-19 07:31:28.903412 | orchestrator | registry.osism.tech/osism/osism-frontend latest d3ad1e2f93bf 52 minutes ago 236MB
2025-09-19 07:31:28.903481 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 84cc807d7f93 4 hours ago 243MB
2025-09-19 07:31:28.903526 | orchestrator | registry.osism.tech/osism/osism-frontend 7bc80eb2be93 7 hours ago 236MB
2025-09-19 07:31:28.903548 | orchestrator | registry.osism.tech/osism/homer v25.05.2 d3334946e20e 6 weeks ago 11.5MB
2025-09-19 07:31:28.903604 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20250711.0 fcbac8373342 2 months ago 571MB
2025-09-19 07:31:28.903626 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 2 months ago 628MB
2025-09-19 07:31:28.903638 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 2 months ago 746MB
2025-09-19 07:31:28.903649 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 2 months ago 318MB
2025-09-19 07:31:28.903660 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20250711 cb02c47a5187 2 months ago 891MB
2025-09-19 07:31:28.903670 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20250711 0ac8facfe451 2 months ago 360MB
2025-09-19 07:31:28.903681 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20250711 6c4eef6335f5 2 months ago 456MB
2025-09-19 07:31:28.903717 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 2 months ago 410MB
2025-09-19 07:31:28.903728 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 2 months ago 358MB
2025-09-19 07:31:28.903739 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20250711.0 7b0f9e78b4e4 2 months ago 575MB
2025-09-19 07:31:28.903750 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20250711.0 f677f8f8094b 2 months ago 535MB
2025-09-19 07:31:28.903761 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20250711.0 8fcfa643b744 2 months ago 308MB
2025-09-19 07:31:28.903771 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20250711.0 267f92fc46f6 2 months ago 1.21GB
2025-09-19 07:31:28.903782 | orchestrator | registry.osism.tech/osism/osism 0.20250709.0 ccd699d89870 2 months ago 310MB
2025-09-19 07:31:28.903792 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.5-alpine f218e591b571 2 months ago 41.4MB
2025-09-19 07:31:28.903803 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.3 4113453efcb3 2 months ago 226MB
2025-09-19 07:31:28.903813 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.2 dae0c92b7b63 3 months ago 329MB
2025-09-19 07:31:28.903824 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 4 months ago 453MB
2025-09-19 07:31:28.903834 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 7 months ago 571MB
2025-09-19 07:31:28.903845 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 12 months ago 300MB
2025-09-19 07:31:28.903856 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 15 months ago 146MB
2025-09-19 07:31:29.169448 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-09-19 07:31:29.170068 | orchestrator | ++ semver 9.2.0 5.0.0
2025-09-19 07:31:29.232208 | orchestrator |
2025-09-19 07:31:29.232298 | orchestrator | ## Containers @ testbed-node-0
2025-09-19 07:31:29.232311 | orchestrator |
2025-09-19 07:31:29.232323 | orchestrator | + [[ 1 -eq -1 ]]
2025-09-19 07:31:29.232335 | orchestrator | + echo
2025-09-19 07:31:29.232346 | orchestrator | + echo '## Containers @ testbed-node-0'
2025-09-19 07:31:29.232358 | orchestrator | + echo
2025-09-19 07:31:29.232370 | orchestrator | + osism container testbed-node-0 ps
2025-09-19 07:31:31.684957 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-09-19 07:31:31.685046 | orchestrator | 71574952ccb9 registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-09-19 07:31:31.685062 | orchestrator | db91966f9b61 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-09-19 07:31:31.685075 | orchestrator | 496045cdcade registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager
2025-09-19 07:31:31.685086 | orchestrator | a35ac3f4bda4 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent
2025-09-19 07:31:31.685114 | orchestrator | 08f12be1fe02 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2025-09-19 07:31:31.685127 | orchestrator | 0dad978f7c9e registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor
2025-09-19 07:31:31.685158 | orchestrator | 51c29be1133d registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2025-09-19 07:31:31.685169 | orchestrator | dccf50c06b14 registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2025-09-19 07:31:31.685180 | orchestrator | 63e7b91cdf46 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2025-09-19 07:31:31.685191 | orchestrator | b4a1b4ce08b5 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns
2025-09-19 07:31:31.685202 | orchestrator | 3961bf4086bb registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-09-19 07:31:31.685212 | orchestrator | 9d9d9a925363 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer
2025-09-19 07:31:31.685223 | orchestrator | 8498ad58f11b registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy
2025-09-19 07:31:31.685234 | orchestrator | fc7053d3372a registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2025-09-19 07:31:31.685245 | orchestrator | 0145362a7994 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor
2025-09-19 07:31:31.685256 | orchestrator | 465aefa995fe registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2025-09-19 07:31:31.685267 | orchestrator | 377ebe69dcc0 registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2025-09-19 07:31:31.685277 | orchestrator | bfe291c089b4 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2025-09-19 07:31:31.685288 | orchestrator | c0e65b7fb8a3 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2025-09-19 07:31:31.685317 | orchestrator | 304f62965949 registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api
2025-09-19 07:31:31.685328 | orchestrator | d11838f28ebd registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener
2025-09-19 07:31:31.685339 | orchestrator | 3e2c7fe8eda0 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api
2025-09-19 07:31:31.685350 | orchestrator | 61f60461fcc2 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-09-19 07:31:31.685366 | orchestrator | 90fa02b15e4e registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler
2025-09-19 07:31:31.685384 | orchestrator | ba49a2be4e48 registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api
2025-09-19 07:31:31.685396 | orchestrator | 7a786c8be4d5 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2025-09-19 07:31:31.685408 | orchestrator | 6d811d1406df registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-09-19 07:31:31.685439 | orchestrator | 6c21ba899405 registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api
2025-09-19 07:31:31.685461 | orchestrator | 4ce961289526 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2025-09-19 07:31:31.685472 | orchestrator | 3d77e5248354 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2025-09-19 07:31:31.685483 | orchestrator | 535950ecd03d registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2025-09-19 07:31:31.685494 | orchestrator | c6b24ff8ea96 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-0
2025-09-19 07:31:31.685505 | orchestrator | 42c6a31633cc registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2025-09-19 07:31:31.685517 | orchestrator | 3bce677a46f4 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2025-09-19 07:31:31.685532 | orchestrator | 4d41b3e5cbb8 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh
2025-09-19 07:31:31.685544 | orchestrator | 507b56d4ed8d registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon
2025-09-19 07:31:31.685555 | orchestrator | c965e00e2e4f registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb
2025-09-19 07:31:31.685566 | orchestrator | 040162eca36e registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards
2025-09-19 07:31:31.685595 | orchestrator | e00c3622ff4b registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch
2025-09-19 07:31:31.685607 | orchestrator | 5d370d255309 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-0
2025-09-19 07:31:31.685626 | orchestrator | 41229a9fe040 registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2025-09-19 07:31:31.685637 | orchestrator | 69489381d589 registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql
2025-09-19 07:31:31.685655 | orchestrator | 2fada7825ef6 registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy
2025-09-19 07:31:31.685666 | orchestrator | 86e4a2b9369e registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd
2025-09-19 07:31:31.685677 | orchestrator | b381c2e15c4c registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db
2025-09-19 07:31:31.685687 | orchestrator | 23791aea5aa9 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db
2025-09-19 07:31:31.685698 | orchestrator | 1c6b4964ce79 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller
2025-09-19 07:31:31.685709 | orchestrator | d75843f980d5 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-0
2025-09-19 07:31:31.685720 | orchestrator | 3d058254515d registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq
2025-09-19 07:31:31.685731 | orchestrator | 0b326f722549 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd
2025-09-19 07:31:31.685742 | orchestrator | f8bd6168a584 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel
2025-09-19 07:31:31.685753 | orchestrator | 3085dc4ea328 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db
2025-09-19 07:31:31.685764 | orchestrator | 4b5b305fa5f9 registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis
2025-09-19 07:31:31.685775 | orchestrator | 892fa951d414 registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached
2025-09-19 07:31:31.685785 | orchestrator | 839bf63b3232 registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron
2025-09-19 07:31:31.685796 | orchestrator | 71d8587d8175 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox
2025-09-19 07:31:31.685807 | orchestrator | ae42d726895d registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd
2025-09-19 07:31:31.906723 | orchestrator |
2025-09-19 07:31:31.906842 | orchestrator | ## Images @ testbed-node-0
2025-09-19 07:31:31.906859 | orchestrator |
2025-09-19 07:31:31.906871 | orchestrator | + echo
2025-09-19 07:31:31.906883 | orchestrator | + echo '## Images @ testbed-node-0'
2025-09-19 07:31:31.906894 | orchestrator | + echo
2025-09-19 07:31:31.906906 | orchestrator | + osism container testbed-node-0 images
2025-09-19 07:31:33.960525 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-09-19 07:31:33.960673 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 2 months ago 628MB
2025-09-19 07:31:33.960689 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 2 months ago 329MB
2025-09-19 07:31:33.961400 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 2 months ago 326MB
2025-09-19 07:31:33.961420 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 2 months ago 1.59GB
2025-09-19 07:31:33.961434 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 2 months ago 1.55GB
2025-09-19 07:31:33.961468 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 2 months ago 417MB
2025-09-19 07:31:33.961489 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 2 months ago 318MB
2025-09-19 07:31:33.961507 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 2 months ago 375MB
2025-09-19 07:31:33.961525 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 2 months ago 746MB
2025-09-19 07:31:33.961543 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 2 months ago 1.01GB
2025-09-19 07:31:33.961563 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 2 months ago 318MB
2025-09-19 07:31:33.961614 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 2 months ago 361MB
2025-09-19 07:31:33.961632 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 2 months ago 361MB
2025-09-19 07:31:33.961648 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 2 months ago 1.21GB
2025-09-19 07:31:33.961667 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 2 months ago 353MB
2025-09-19 07:31:33.961685 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 2 months ago 410MB
2025-09-19 07:31:33.961704 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711
62e13ec7689a 2 months ago 344MB 2025-09-19 07:31:33.961723 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 2 months ago 358MB 2025-09-19 07:31:33.961740 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 2 months ago 324MB 2025-09-19 07:31:33.961759 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 2 months ago 351MB 2025-09-19 07:31:33.961770 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 2 months ago 324MB 2025-09-19 07:31:33.961781 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 2 months ago 590MB 2025-09-19 07:31:33.961791 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 2 months ago 947MB 2025-09-19 07:31:33.961801 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 2 months ago 946MB 2025-09-19 07:31:33.961812 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 2 months ago 947MB 2025-09-19 07:31:33.961823 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 2 months ago 946MB 2025-09-19 07:31:33.961833 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.0.20250711 05a4552273f6 2 months ago 1.04GB 2025-09-19 07:31:33.961844 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.0.20250711 41f8c34132c7 2 months ago 1.04GB 2025-09-19 07:31:33.961867 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250711 06deffb77b4f 2 months ago 1.1GB 2025-09-19 07:31:33.961878 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250711 02867223fb33 2 months ago 1.1GB 2025-09-19 07:31:33.961896 | orchestrator | registry.osism.tech/kolla/release/octavia-api 
15.0.1.20250711 6146c08f2b76 2 months ago 1.12GB 2025-09-19 07:31:33.961928 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250711 6d529ee19c1c 2 months ago 1.1GB 2025-09-19 07:31:33.961939 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250711 b1ed239b634f 2 months ago 1.12GB 2025-09-19 07:31:33.961950 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 2 months ago 1.15GB 2025-09-19 07:31:33.961961 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 2 months ago 1.04GB 2025-09-19 07:31:33.961971 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 2 months ago 1.06GB 2025-09-19 07:31:33.961982 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 2 months ago 1.06GB 2025-09-19 07:31:33.961992 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 2 months ago 1.06GB 2025-09-19 07:31:33.962003 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 2 months ago 1.41GB 2025-09-19 07:31:33.962108 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 2 months ago 1.41GB 2025-09-19 07:31:33.962123 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 2 months ago 1.29GB 2025-09-19 07:31:33.962134 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 2 months ago 1.42GB 2025-09-19 07:31:33.962145 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 2 months ago 1.29GB 2025-09-19 07:31:33.962155 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 2 months ago 1.29GB 2025-09-19 07:31:33.962166 | orchestrator | registry.osism.tech/kolla/release/magnum-api 
19.0.1.20250711 71f47d2b2def 2 months ago 1.2GB 2025-09-19 07:31:33.962176 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 2 months ago 1.31GB 2025-09-19 07:31:33.962187 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 2 months ago 1.05GB 2025-09-19 07:31:33.962197 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 2 months ago 1.05GB 2025-09-19 07:31:33.962208 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 2 months ago 1.05GB 2025-09-19 07:31:33.962218 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 2 months ago 1.06GB 2025-09-19 07:31:33.962229 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 2 months ago 1.06GB 2025-09-19 07:31:33.962239 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 2 months ago 1.05GB 2025-09-19 07:31:33.962250 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20250711 f2e37439c6b7 2 months ago 1.11GB 2025-09-19 07:31:33.962260 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20250711 b3d19c53d4de 2 months ago 1.11GB 2025-09-19 07:31:33.962271 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 2 months ago 1.11GB 2025-09-19 07:31:33.962291 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 2 months ago 1.13GB 2025-09-19 07:31:33.962301 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 2 months ago 1.11GB 2025-09-19 07:31:33.962312 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 2 months ago 1.24GB 2025-09-19 07:31:33.962329 | orchestrator | 
registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20250711 c26d685bbc69 2 months ago 1.04GB 2025-09-19 07:31:33.962341 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20250711 55a7448b63ad 2 months ago 1.04GB 2025-09-19 07:31:33.962352 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20250711 b8a4d60cb725 2 months ago 1.04GB 2025-09-19 07:31:33.962362 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20250711 c0822bfcb81c 2 months ago 1.04GB 2025-09-19 07:31:33.962374 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 4 months ago 1.27GB 2025-09-19 07:31:34.236176 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-09-19 07:31:34.236359 | orchestrator | ++ semver 9.2.0 5.0.0 2025-09-19 07:31:34.294381 | orchestrator | 2025-09-19 07:31:34.294460 | orchestrator | ## Containers @ testbed-node-1 2025-09-19 07:31:34.294476 | orchestrator | 2025-09-19 07:31:34.294487 | orchestrator | + [[ 1 -eq -1 ]] 2025-09-19 07:31:34.294499 | orchestrator | + echo 2025-09-19 07:31:34.294510 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-09-19 07:31:34.294522 | orchestrator | + echo 2025-09-19 07:31:34.294533 | orchestrator | + osism container testbed-node-1 ps 2025-09-19 07:31:36.570435 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-09-19 07:31:36.570533 | orchestrator | 2a76a374a21f registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-09-19 07:31:36.570550 | orchestrator | 08c056820a21 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-09-19 07:31:36.570562 | orchestrator | 50ff25420f0f registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes 
(healthy) octavia_health_manager 2025-09-19 07:31:36.570645 | orchestrator | ad20dc2bb14a registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2025-09-19 07:31:36.570659 | orchestrator | 91d1fe40d813 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-09-19 07:31:36.570671 | orchestrator | 6a477d78edee registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-09-19 07:31:36.570682 | orchestrator | 7152d5eb9ca1 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-09-19 07:31:36.570693 | orchestrator | edf43f6a6842 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-09-19 07:31:36.570704 | orchestrator | f7a734e9fe49 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-09-19 07:31:36.570737 | orchestrator | 47599184d3bc registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-09-19 07:31:36.570748 | orchestrator | d64019c79625 registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-09-19 07:31:36.570759 | orchestrator | 40c8f68663e8 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2025-09-19 07:31:36.570770 | orchestrator | 0e94eba6f2f9 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2025-09-19 
07:31:36.570781 | orchestrator | cfb29c993fab registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-09-19 07:31:36.570792 | orchestrator | 384cd354ce36 registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-09-19 07:31:36.570825 | orchestrator | 91621ee91bdd registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2025-09-19 07:31:36.570837 | orchestrator | 92f206615464 registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-09-19 07:31:36.570849 | orchestrator | 331ddf350c7f registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-09-19 07:31:36.570860 | orchestrator | 035f5b6a3db9 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2025-09-19 07:31:36.570889 | orchestrator | 9653fbed7a5f registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2025-09-19 07:31:36.570901 | orchestrator | a72a300dba86 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2025-09-19 07:31:36.570912 | orchestrator | b35405145036 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2025-09-19 07:31:36.570923 | orchestrator | 3a56c7c9a90b registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) 
nova_scheduler 2025-09-19 07:31:36.570934 | orchestrator | 8d4bbc3df35e registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2025-09-19 07:31:36.570945 | orchestrator | 74d8abd795f8 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2025-09-19 07:31:36.570956 | orchestrator | 5b45b78f17ea registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-09-19 07:31:36.570969 | orchestrator | 4f6ea0af59c3 registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-09-19 07:31:36.570989 | orchestrator | 4487a04c9216 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-09-19 07:31:36.571002 | orchestrator | 501b128211be registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2025-09-19 07:31:36.571015 | orchestrator | 57424322ecd4 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2025-09-19 07:31:36.571028 | orchestrator | 90db0f778c2d registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-09-19 07:31:36.571040 | orchestrator | 355ad9d0e76c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1 2025-09-19 07:31:36.571053 | orchestrator | 81e3540547d7 registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 
17 minutes (healthy) keystone 2025-09-19 07:31:36.571066 | orchestrator | 824e06e28639 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2025-09-19 07:31:36.571079 | orchestrator | b77bb9f0200e registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2025-09-19 07:31:36.571091 | orchestrator | 2745e8804125 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2025-09-19 07:31:36.571103 | orchestrator | 38d05848189a registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2025-09-19 07:31:36.571116 | orchestrator | 82ba39251680 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-09-19 07:31:36.571143 | orchestrator | 230cb9058b07 registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2025-09-19 07:31:36.571156 | orchestrator | 83f720340d37 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-1 2025-09-19 07:31:36.571177 | orchestrator | 4604f390bea4 registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2025-09-19 07:31:36.571189 | orchestrator | 6658e23c147c registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2025-09-19 07:31:36.571203 | orchestrator | 2bc82f33139a registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2025-09-19 07:31:36.571216 | orchestrator | 805987edd38a 
registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd 2025-09-19 07:31:36.571229 | orchestrator | 7be10e19672a registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_sb_db 2025-09-19 07:31:36.571241 | orchestrator | f392bff9c4dd registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_nb_db 2025-09-19 07:31:36.571263 | orchestrator | 0e800753a090 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2025-09-19 07:31:36.571276 | orchestrator | d656bb0bd41e registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq 2025-09-19 07:31:36.571289 | orchestrator | 97d739be75d5 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-1 2025-09-19 07:31:36.571302 | orchestrator | c7e1e7a912b3 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2025-09-19 07:31:36.571315 | orchestrator | c2e82a566e37 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db 2025-09-19 07:31:36.571328 | orchestrator | 294172acf2f6 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel 2025-09-19 07:31:36.571340 | orchestrator | 2d9e511cf4d8 registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis 2025-09-19 07:31:36.571351 | orchestrator | 29933c59b91d registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 28 minutes ago 
Up 28 minutes (healthy) memcached 2025-09-19 07:31:36.571362 | orchestrator | f32758820b34 registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-09-19 07:31:36.571373 | orchestrator | de2816e91c8b registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-09-19 07:31:36.571384 | orchestrator | e07681221d12 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2025-09-19 07:31:36.852836 | orchestrator | 2025-09-19 07:31:36.852933 | orchestrator | ## Images @ testbed-node-1 2025-09-19 07:31:36.852949 | orchestrator | 2025-09-19 07:31:36.852960 | orchestrator | + echo 2025-09-19 07:31:36.852972 | orchestrator | + echo '## Images @ testbed-node-1' 2025-09-19 07:31:36.852985 | orchestrator | + echo 2025-09-19 07:31:36.852995 | orchestrator | + osism container testbed-node-1 images 2025-09-19 07:31:39.192465 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-09-19 07:31:39.192679 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 2 months ago 628MB 2025-09-19 07:31:39.193442 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 2 months ago 329MB 2025-09-19 07:31:39.193471 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 2 months ago 326MB 2025-09-19 07:31:39.193488 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 2 months ago 1.59GB 2025-09-19 07:31:39.193505 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 2 months ago 1.55GB 2025-09-19 07:31:39.193522 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 2 months ago 417MB 2025-09-19 07:31:39.193539 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 
cd87896ace76 2 months ago 318MB 2025-09-19 07:31:39.193618 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 2 months ago 746MB 2025-09-19 07:31:39.193632 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 2 months ago 375MB 2025-09-19 07:31:39.193656 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 2 months ago 1.01GB 2025-09-19 07:31:39.193666 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 2 months ago 318MB 2025-09-19 07:31:39.193676 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 2 months ago 361MB 2025-09-19 07:31:39.193686 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 2 months ago 361MB 2025-09-19 07:31:39.193695 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 2 months ago 1.21GB 2025-09-19 07:31:39.193705 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 2 months ago 353MB 2025-09-19 07:31:39.193714 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 2 months ago 410MB 2025-09-19 07:31:39.193724 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 2 months ago 344MB 2025-09-19 07:31:39.193733 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 2 months ago 358MB 2025-09-19 07:31:39.193743 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 2 months ago 324MB 2025-09-19 07:31:39.193775 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 2 months ago 351MB 2025-09-19 07:31:39.193785 | orchestrator | registry.osism.tech/kolla/release/redis 
7.0.15.20250711 d7d5c3586026 2 months ago 324MB 2025-09-19 07:31:39.193799 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 2 months ago 590MB 2025-09-19 07:31:39.193809 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 2 months ago 947MB 2025-09-19 07:31:39.193819 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 2 months ago 946MB 2025-09-19 07:31:39.193828 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 2 months ago 947MB 2025-09-19 07:31:39.193838 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 2 months ago 946MB 2025-09-19 07:31:39.193847 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250711 06deffb77b4f 2 months ago 1.1GB 2025-09-19 07:31:39.193856 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250711 02867223fb33 2 months ago 1.1GB 2025-09-19 07:31:39.193866 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250711 6146c08f2b76 2 months ago 1.12GB 2025-09-19 07:31:39.193875 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250711 6d529ee19c1c 2 months ago 1.1GB 2025-09-19 07:31:39.193885 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250711 b1ed239b634f 2 months ago 1.12GB 2025-09-19 07:31:39.193894 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 2 months ago 1.15GB 2025-09-19 07:31:39.193904 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 2 months ago 1.04GB 2025-09-19 07:31:39.193920 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 2 months ago 1.06GB 2025-09-19 07:31:39.193929 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 
19.0.1.20250711 e475391ce44d 2 months ago 1.06GB 2025-09-19 07:31:39.193939 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 2 months ago 1.06GB 2025-09-19 07:31:39.193948 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 2 months ago 1.41GB 2025-09-19 07:31:39.193957 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 2 months ago 1.41GB 2025-09-19 07:31:39.193967 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 2 months ago 1.29GB 2025-09-19 07:31:39.193977 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 2 months ago 1.42GB 2025-09-19 07:31:39.193987 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 2 months ago 1.29GB 2025-09-19 07:31:39.193997 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 2 months ago 1.29GB 2025-09-19 07:31:39.194006 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 2 months ago 1.2GB 2025-09-19 07:31:39.194141 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 2 months ago 1.31GB 2025-09-19 07:31:39.194157 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 2 months ago 1.05GB 2025-09-19 07:31:39.194166 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 2 months ago 1.05GB 2025-09-19 07:31:39.194176 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 2 months ago 1.05GB 2025-09-19 07:31:39.194185 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 2 months ago 1.06GB 2025-09-19 07:31:39.194195 | orchestrator | 
registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 2 months ago 1.06GB 2025-09-19 07:31:39.194205 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 2 months ago 1.05GB 2025-09-19 07:31:39.194214 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 2 months ago 1.11GB 2025-09-19 07:31:39.194224 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 2 months ago 1.13GB 2025-09-19 07:31:39.194250 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 2 months ago 1.11GB 2025-09-19 07:31:39.194260 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 2 months ago 1.24GB 2025-09-19 07:31:39.194275 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 4 months ago 1.27GB 2025-09-19 07:31:39.465433 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-09-19 07:31:39.466316 | orchestrator | ++ semver 9.2.0 5.0.0 2025-09-19 07:31:39.533439 | orchestrator | 2025-09-19 07:31:39.533519 | orchestrator | ## Containers @ testbed-node-2 2025-09-19 07:31:39.533532 | orchestrator | 2025-09-19 07:31:39.533544 | orchestrator | + [[ 1 -eq -1 ]] 2025-09-19 07:31:39.533555 | orchestrator | + echo 2025-09-19 07:31:39.533566 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-09-19 07:31:39.533618 | orchestrator | + echo 2025-09-19 07:31:39.533630 | orchestrator | + osism container testbed-node-2 ps 2025-09-19 07:31:41.817031 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-09-19 07:31:41.817155 | orchestrator | d33f59144966 registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-09-19 07:31:41.817177 | orchestrator | 31cc965a722e 
registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_housekeeping 2025-09-19 07:31:41.817196 | orchestrator | 3fd15430b739 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager 2025-09-19 07:31:41.817214 | orchestrator | 6e7280c27539 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2025-09-19 07:31:41.817230 | orchestrator | 19040cf26b54 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-09-19 07:31:41.817266 | orchestrator | e470072d3cae registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-09-19 07:31:41.817287 | orchestrator | d3ba741cc59d registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-09-19 07:31:41.817306 | orchestrator | d359ba88ff86 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-09-19 07:31:41.817323 | orchestrator | 4ed4573d9e7e registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-09-19 07:31:41.817334 | orchestrator | 1e4249ac77d9 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-09-19 07:31:41.817345 | orchestrator | 42f280eb50bd registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-09-19 07:31:41.817356 | orchestrator | 2adc1dd5763f 
registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer
2025-09-19 07:31:41.817367 | orchestrator | 50d262e3224a registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy
2025-09-19 07:31:41.817378 | orchestrator | cd1549a50a45 registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2025-09-19 07:31:41.817388 | orchestrator | aa6d05554878 registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2025-09-19 07:31:41.817399 | orchestrator | 1c10f10c501d registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor
2025-09-19 07:31:41.817410 | orchestrator | f80dd18c82a0 registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2025-09-19 07:31:41.817421 | orchestrator | e299c8a66fcc registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2025-09-19 07:31:41.817445 | orchestrator | a421cd19096c registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker
2025-09-19 07:31:41.817475 | orchestrator | bf8477e6e3e1 registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api
2025-09-19 07:31:41.817487 | orchestrator | 945ffcf81c21 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener
2025-09-19 07:31:41.817498 | orchestrator | 7953ec79d3f0 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api
2025-09-19 07:31:41.817509 | orchestrator | 12e612a1d94a registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-09-19 07:31:41.817519 | orchestrator | 9f310e897198 registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api
2025-09-19 07:31:41.817530 | orchestrator | f5e0379682e8 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler
2025-09-19 07:31:41.817542 | orchestrator | 77a22a2f280f registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2025-09-19 07:31:41.817555 | orchestrator | f46b5173efeb registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api
2025-09-19 07:31:41.817566 | orchestrator | 64a8c148a1c5 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-09-19 07:31:41.817609 | orchestrator | 8f3cb2603a34 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2025-09-19 07:31:41.817642 | orchestrator | 5085fd0ad694 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter
2025-09-19 07:31:41.817655 | orchestrator | 8b1be39ea3df registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 15 minutes ago Up 15
minutes prometheus_node_exporter
2025-09-19 07:31:41.817668 | orchestrator | c1001a156fad registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2
2025-09-19 07:31:41.817680 | orchestrator | 2a077942fbbe registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2025-09-19 07:31:41.817693 | orchestrator | 3dfc89a1600a registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2025-09-19 07:31:41.817706 | orchestrator | 9525d7c59398 registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon
2025-09-19 07:31:41.817719 | orchestrator | 4720abeb2ee8 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2025-09-19 07:31:41.817740 | orchestrator | 9eecf4540408 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2025-09-19 07:31:41.817752 | orchestrator | 54fe893c2521 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb
2025-09-19 07:31:41.817770 | orchestrator | 70c87d26773f registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch
2025-09-19 07:31:41.817783 | orchestrator | e7523d6f5dd1 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-2
2025-09-19 07:31:41.817805 | orchestrator | 55fa7e98790f registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2025-09-19 07:31:41.817818 | orchestrator | e39876d1ee37 registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2025-09-19 07:31:41.817836 | orchestrator | 7b5d32754c5f registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy
2025-09-19 07:31:41.817855 | orchestrator | 5feb203aaf9c registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_northd
2025-09-19 07:31:41.817872 | orchestrator | 115d9f8e30f4 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_sb_db
2025-09-19 07:31:41.817890 | orchestrator | 1069a0e1d031 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_nb_db
2025-09-19 07:31:41.817908 | orchestrator | 851e79d52478 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq
2025-09-19 07:31:41.817926 | orchestrator | fe0782d6de83 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller
2025-09-19 07:31:41.817943 | orchestrator | 54a8d1bbae12 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-2
2025-09-19 07:31:41.817962 | orchestrator | 875d7653a520 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd
2025-09-19 07:31:41.817981 | orchestrator | c8734c1b08f1 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db
2025-09-19 07:31:41.818000 | orchestrator | 653b053e7439 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 28 minutes ago Up
28 minutes (healthy) redis_sentinel
2025-09-19 07:31:41.818084 | orchestrator | ce61aeef8923 registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis
2025-09-19 07:31:41.818098 | orchestrator | 07a4e3c1f2a7 registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached
2025-09-19 07:31:41.818113 | orchestrator | e50bff9b9563 registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron
2025-09-19 07:31:41.818143 | orchestrator | 5f62d7dd45a9 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox
2025-09-19 07:31:41.818162 | orchestrator | 558bb6950734 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd
2025-09-19 07:31:42.083269 | orchestrator |
2025-09-19 07:31:42.083347 | orchestrator | ## Images @ testbed-node-2
2025-09-19 07:31:42.083358 | orchestrator |
2025-09-19 07:31:42.083367 | orchestrator | + echo
2025-09-19 07:31:42.083375 | orchestrator | + echo '## Images @ testbed-node-2'
2025-09-19 07:31:42.083384 | orchestrator | + echo
2025-09-19 07:31:42.083392 | orchestrator | + osism container testbed-node-2 images
2025-09-19 07:31:44.282741 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-09-19 07:31:44.282858 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 2 months ago 628MB
2025-09-19 07:31:44.282873 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 2 months ago 329MB
2025-09-19 07:31:44.282885 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 2 months ago 326MB
2025-09-19 07:31:44.282897 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 2 months ago 1.59GB
2025-09-19 07:31:44.282907 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 2 months ago 1.55GB
2025-09-19 07:31:44.282918 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 2 months ago 417MB
2025-09-19 07:31:44.282929 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 2 months ago 318MB
2025-09-19 07:31:44.282978 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 2 months ago 746MB
2025-09-19 07:31:44.282992 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 2 months ago 375MB
2025-09-19 07:31:44.283003 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 2 months ago 1.01GB
2025-09-19 07:31:44.283014 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 2 months ago 318MB
2025-09-19 07:31:44.283025 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 2 months ago 361MB
2025-09-19 07:31:44.283061 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 2 months ago 361MB
2025-09-19 07:31:44.283081 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 2 months ago 1.21GB
2025-09-19 07:31:44.283099 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 2 months ago 353MB
2025-09-19 07:31:44.283116 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 2 months ago 410MB
2025-09-19 07:31:44.283134 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 2 months ago 344MB
2025-09-19 07:31:44.283151 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 2 months ago 358MB
2025-09-19
07:31:44.283170 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 2 months ago 351MB
2025-09-19 07:31:44.283188 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 2 months ago 324MB
2025-09-19 07:31:44.283230 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 2 months ago 324MB
2025-09-19 07:31:44.283252 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 2 months ago 590MB
2025-09-19 07:31:44.283273 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 2 months ago 946MB
2025-09-19 07:31:44.283288 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 2 months ago 947MB
2025-09-19 07:31:44.283302 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 2 months ago 947MB
2025-09-19 07:31:44.283314 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 2 months ago 946MB
2025-09-19 07:31:44.283326 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250711 06deffb77b4f 2 months ago 1.1GB
2025-09-19 07:31:44.283338 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250711 02867223fb33 2 months ago 1.1GB
2025-09-19 07:31:44.283350 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250711 6146c08f2b76 2 months ago 1.12GB
2025-09-19 07:31:44.283363 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250711 6d529ee19c1c 2 months ago 1.1GB
2025-09-19 07:31:44.283375 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250711 b1ed239b634f 2 months ago 1.12GB
2025-09-19 07:31:44.283408 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 2 months ago 1.15GB
2025-09-19 07:31:44.283422 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 2 months ago 1.04GB
2025-09-19 07:31:44.283435 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 2 months ago 1.06GB
2025-09-19 07:31:44.283447 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 2 months ago 1.06GB
2025-09-19 07:31:44.283466 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 2 months ago 1.06GB
2025-09-19 07:31:44.283483 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 2 months ago 1.41GB
2025-09-19 07:31:44.283503 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 2 months ago 1.41GB
2025-09-19 07:31:44.283521 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 2 months ago 1.29GB
2025-09-19 07:31:44.283540 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 2 months ago 1.42GB
2025-09-19 07:31:44.283560 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 2 months ago 1.29GB
2025-09-19 07:31:44.283604 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 2 months ago 1.29GB
2025-09-19 07:31:44.283619 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 2 months ago 1.2GB
2025-09-19 07:31:44.283633 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 2 months ago 1.31GB
2025-09-19 07:31:44.283645 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 2 months ago 1.05GB
2025-09-19 07:31:44.283656 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 2 months ago 1.05GB
2025-09-19
07:31:44.283681 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 2 months ago 1.05GB
2025-09-19 07:31:44.283699 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 2 months ago 1.06GB
2025-09-19 07:31:44.283719 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 2 months ago 1.06GB
2025-09-19 07:31:44.283739 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 2 months ago 1.05GB
2025-09-19 07:31:44.283758 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 2 months ago 1.11GB
2025-09-19 07:31:44.283778 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 2 months ago 1.13GB
2025-09-19 07:31:44.283798 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 2 months ago 1.11GB
2025-09-19 07:31:44.283819 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 2 months ago 1.24GB
2025-09-19 07:31:44.283840 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 4 months ago 1.27GB
2025-09-19 07:31:44.585462 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2025-09-19 07:31:44.595375 | orchestrator | + set -e
2025-09-19 07:31:44.595808 | orchestrator | + source /opt/manager-vars.sh
2025-09-19 07:31:44.596669 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-19 07:31:44.596692 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-19 07:31:44.596700 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-19 07:31:44.596706 | orchestrator | ++ CEPH_VERSION=reef
2025-09-19 07:31:44.596713 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-19 07:31:44.596721 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-19 07:31:44.596728 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-09-19 07:31:44.596734 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-09-19 07:31:44.596741 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-19 07:31:44.596747 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-19 07:31:44.596752 | orchestrator | ++ export ARA=false
2025-09-19 07:31:44.596758 | orchestrator | ++ ARA=false
2025-09-19 07:31:44.596763 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-19 07:31:44.596769 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-19 07:31:44.596774 | orchestrator | ++ export TEMPEST=false
2025-09-19 07:31:44.596779 | orchestrator | ++ TEMPEST=false
2025-09-19 07:31:44.596788 | orchestrator | ++ export IS_ZUUL=true
2025-09-19 07:31:44.596793 | orchestrator | ++ IS_ZUUL=true
2025-09-19 07:31:44.596799 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189
2025-09-19 07:31:44.596805 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189
2025-09-19 07:31:44.596810 | orchestrator | ++ export EXTERNAL_API=false
2025-09-19 07:31:44.596815 | orchestrator | ++ EXTERNAL_API=false
2025-09-19 07:31:44.596820 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-19 07:31:44.596825 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-19 07:31:44.596831 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-19 07:31:44.596836 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-19 07:31:44.596841 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-19 07:31:44.596847 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-19 07:31:44.596953 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-09-19 07:31:44.596963 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2025-09-19 07:31:44.605091 | orchestrator | + set -e
2025-09-19 07:31:44.605165 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-19 07:31:44.605181 | orchestrator | ++ export INTERACTIVE=false
2025-09-19 07:31:44.605193 | orchestrator | ++ INTERACTIVE=false
2025-09-19 07:31:44.605204 |
orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-19 07:31:44.605215 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-19 07:31:44.605226 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-09-19 07:31:44.606174 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-09-19 07:31:44.612718 | orchestrator |
2025-09-19 07:31:44.612748 | orchestrator | # Ceph status
2025-09-19 07:31:44.612759 | orchestrator |
2025-09-19 07:31:44.612771 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-09-19 07:31:44.612781 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-09-19 07:31:44.612819 | orchestrator | + echo
2025-09-19 07:31:44.612830 | orchestrator | + echo '# Ceph status'
2025-09-19 07:31:44.612841 | orchestrator | + echo
2025-09-19 07:31:44.612852 | orchestrator | + ceph -s
2025-09-19 07:31:45.216347 | orchestrator | cluster:
2025-09-19 07:31:45.216440 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2025-09-19 07:31:45.216456 | orchestrator | health: HEALTH_OK
2025-09-19 07:31:45.216468 | orchestrator |
2025-09-19 07:31:45.216479 | orchestrator | services:
2025-09-19 07:31:45.216491 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 27m)
2025-09-19 07:31:45.216503 | orchestrator | mgr: testbed-node-0(active, since 15m), standbys: testbed-node-2, testbed-node-1
2025-09-19 07:31:45.216515 | orchestrator | mds: 1/1 daemons up, 2 standby
2025-09-19 07:31:45.216527 | orchestrator | osd: 6 osds: 6 up (since 23m), 6 in (since 24m)
2025-09-19 07:31:45.216538 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2025-09-19 07:31:45.216549 | orchestrator |
2025-09-19 07:31:45.216560 | orchestrator | data:
2025-09-19 07:31:45.216571 | orchestrator | volumes: 1/1 healthy
2025-09-19 07:31:45.216628 | orchestrator | pools: 14 pools, 401 pgs
2025-09-19 07:31:45.216640 | orchestrator | objects: 523 objects, 2.2 GiB
2025-09-19 07:31:45.216651 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2025-09-19 07:31:45.216662 | orchestrator | pgs: 401 active+clean
2025-09-19 07:31:45.216673 | orchestrator |
2025-09-19 07:31:45.273391 | orchestrator |
2025-09-19 07:31:45.273475 | orchestrator | # Ceph versions
2025-09-19 07:31:45.273518 | orchestrator |
2025-09-19 07:31:45.273530 | orchestrator | + echo
2025-09-19 07:31:45.273542 | orchestrator | + echo '# Ceph versions'
2025-09-19 07:31:45.273554 | orchestrator | + echo
2025-09-19 07:31:45.273566 | orchestrator | + ceph versions
2025-09-19 07:31:45.900321 | orchestrator | {
2025-09-19 07:31:45.900422 | orchestrator | "mon": {
2025-09-19 07:31:45.900437 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-09-19 07:31:45.900450 | orchestrator | },
2025-09-19 07:31:45.900462 | orchestrator | "mgr": {
2025-09-19 07:31:45.900473 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-09-19 07:31:45.900484 | orchestrator | },
2025-09-19 07:31:45.900495 | orchestrator | "osd": {
2025-09-19 07:31:45.900505 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2025-09-19 07:31:45.900516 | orchestrator | },
2025-09-19 07:31:45.900527 | orchestrator | "mds": {
2025-09-19 07:31:45.900538 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-09-19 07:31:45.900548 | orchestrator | },
2025-09-19 07:31:45.900559 | orchestrator | "rgw": {
2025-09-19 07:31:45.900570 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-09-19 07:31:45.900613 | orchestrator | },
2025-09-19 07:31:45.900625 | orchestrator | "overall": {
2025-09-19 07:31:45.900658 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2025-09-19 07:31:45.900670 | orchestrator | }
2025-09-19 07:31:45.900681
| orchestrator | }
2025-09-19 07:31:45.947786 | orchestrator |
2025-09-19 07:31:45.947862 | orchestrator | # Ceph OSD tree
2025-09-19 07:31:45.947872 | orchestrator |
2025-09-19 07:31:45.947882 | orchestrator | + echo
2025-09-19 07:31:45.947891 | orchestrator | + echo '# Ceph OSD tree'
2025-09-19 07:31:45.947900 | orchestrator | + echo
2025-09-19 07:31:45.947908 | orchestrator | + ceph osd df tree
2025-09-19 07:31:46.513111 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2025-09-19 07:31:46.513220 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 434 MiB 113 GiB 5.92 1.00 - root default
2025-09-19 07:31:46.513233 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 147 MiB 38 GiB 5.93 1.00 - host testbed-node-3
2025-09-19 07:31:46.513245 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 5.73 0.97 199 up osd.0
2025-09-19 07:31:46.513256 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.12 1.03 193 up osd.5
2025-09-19 07:31:46.513267 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4
2025-09-19 07:31:46.513278 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 7.04 1.19 204 up osd.1
2025-09-19 07:31:46.513314 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 980 MiB 907 MiB 1 KiB 74 MiB 19 GiB 4.79 0.81 186 up osd.4
2025-09-19 07:31:46.513325 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5
2025-09-19 07:31:46.513336 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 1 KiB 70 MiB 19 GiB 7.20 1.22 188 up osd.2
2025-09-19 07:31:46.513347 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 948 MiB 875 MiB 1 KiB 74 MiB 19 GiB 4.64 0.78 200 up osd.3
2025-09-19 07:31:46.513357 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 434 MiB 113 GiB 5.92
2025-09-19 07:31:46.513369 | orchestrator | MIN/MAX VAR: 0.78/1.22 STDDEV: 0.99
2025-09-19 07:31:46.565229 | orchestrator |
2025-09-19 07:31:46.565308 | orchestrator | # Ceph monitor status
2025-09-19 07:31:46.565321 | orchestrator |
2025-09-19 07:31:46.565332 | orchestrator | + echo
2025-09-19 07:31:46.565344 | orchestrator | + echo '# Ceph monitor status'
2025-09-19 07:31:46.565355 | orchestrator | + echo
2025-09-19 07:31:46.565366 | orchestrator | + ceph mon stat
2025-09-19 07:31:47.192050 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 4, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2025-09-19 07:31:47.235741 | orchestrator |
2025-09-19 07:31:47.235826 | orchestrator | # Ceph quorum status
2025-09-19 07:31:47.235839 | orchestrator |
2025-09-19 07:31:47.235848 | orchestrator | + echo
2025-09-19 07:31:47.235856 | orchestrator | + echo '# Ceph quorum status'
2025-09-19 07:31:47.235865 | orchestrator | + echo
2025-09-19 07:31:47.236117 | orchestrator | + jq
2025-09-19 07:31:47.236136 | orchestrator | + ceph quorum_status
2025-09-19 07:31:47.904985 | orchestrator | {
2025-09-19 07:31:47.905065 | orchestrator | "election_epoch": 4,
2025-09-19 07:31:47.905076 | orchestrator | "quorum": [
2025-09-19 07:31:47.905084 | orchestrator | 0,
2025-09-19 07:31:47.905092 | orchestrator | 1,
2025-09-19 07:31:47.905099 | orchestrator | 2
2025-09-19 07:31:47.905106 | orchestrator | ],
2025-09-19 07:31:47.905113 | orchestrator | "quorum_names": [
2025-09-19 07:31:47.905121 | orchestrator | "testbed-node-0",
2025-09-19 07:31:47.905128 | orchestrator | "testbed-node-1",
2025-09-19 07:31:47.905135 | orchestrator | "testbed-node-2"
2025-09-19 07:31:47.905142 | orchestrator | ],
2025-09-19 07:31:47.905150 | orchestrator | "quorum_leader_name": "testbed-node-0",
2025-09-19 07:31:47.905158 | orchestrator | "quorum_age": 1643,
2025-09-19 07:31:47.905166 | orchestrator | "features": {
2025-09-19 07:31:47.905173 | orchestrator | "quorum_con": "4540138322906710015",
2025-09-19 07:31:47.905180 | orchestrator | "quorum_mon": [
2025-09-19 07:31:47.905187 | orchestrator | "kraken",
2025-09-19 07:31:47.905194 | orchestrator | "luminous",
2025-09-19 07:31:47.905202 | orchestrator | "mimic",
2025-09-19 07:31:47.905209 | orchestrator | "osdmap-prune",
2025-09-19 07:31:47.905216 | orchestrator | "nautilus",
2025-09-19 07:31:47.905223 | orchestrator | "octopus",
2025-09-19 07:31:47.905230 | orchestrator | "pacific",
2025-09-19 07:31:47.905237 | orchestrator | "elector-pinging",
2025-09-19 07:31:47.905244 | orchestrator | "quincy",
2025-09-19 07:31:47.905252 | orchestrator | "reef"
2025-09-19 07:31:47.905259 | orchestrator | ]
2025-09-19 07:31:47.905266 | orchestrator | },
2025-09-19 07:31:47.905274 | orchestrator | "monmap": {
2025-09-19 07:31:47.905281 | orchestrator | "epoch": 1,
2025-09-19 07:31:47.905288 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2025-09-19 07:31:47.905296 | orchestrator | "modified": "2025-09-19T07:04:12.664286Z",
2025-09-19 07:31:47.905303 | orchestrator | "created": "2025-09-19T07:04:12.664286Z",
2025-09-19 07:31:47.905310 | orchestrator | "min_mon_release": 18,
2025-09-19 07:31:47.905331 | orchestrator | "min_mon_release_name": "reef",
2025-09-19 07:31:47.905339 | orchestrator | "election_strategy": 1,
2025-09-19 07:31:47.905346 | orchestrator | "disallowed_leaders: ": "",
2025-09-19 07:31:47.905353 | orchestrator | "stretch_mode": false,
2025-09-19 07:31:47.905361 | orchestrator | "tiebreaker_mon": "",
2025-09-19 07:31:47.905368 | orchestrator | "removed_ranks: ": "",
2025-09-19 07:31:47.905375 | orchestrator | "features": {
2025-09-19 07:31:47.905402 | orchestrator | "persistent": [
2025-09-19 07:31:47.905410 | orchestrator | "kraken",
2025-09-19 07:31:47.905417 | orchestrator | "luminous",
2025-09-19 07:31:47.905424 | orchestrator | "mimic",
2025-09-19 07:31:47.905431 | orchestrator | "osdmap-prune",
2025-09-19 07:31:47.905438 | orchestrator | "nautilus",
2025-09-19 07:31:47.905445 | orchestrator | "octopus",
2025-09-19 07:31:47.905452 | orchestrator | "pacific",
2025-09-19 07:31:47.905459 | orchestrator | "elector-pinging",
2025-09-19 07:31:47.905466 | orchestrator | "quincy",
2025-09-19 07:31:47.905473 | orchestrator | "reef"
2025-09-19 07:31:47.905481 | orchestrator | ],
2025-09-19 07:31:47.905488 | orchestrator | "optional": []
2025-09-19 07:31:47.905495 | orchestrator | },
2025-09-19 07:31:47.905502 | orchestrator | "mons": [
2025-09-19 07:31:47.905509 | orchestrator | {
2025-09-19 07:31:47.905517 | orchestrator | "rank": 0,
2025-09-19 07:31:47.905524 | orchestrator | "name": "testbed-node-0",
2025-09-19 07:31:47.905531 | orchestrator | "public_addrs": {
2025-09-19 07:31:47.905538 | orchestrator | "addrvec": [
2025-09-19 07:31:47.905545 | orchestrator | {
2025-09-19 07:31:47.905553 | orchestrator | "type": "v2",
2025-09-19 07:31:47.905562 | orchestrator | "addr": "192.168.16.10:3300",
2025-09-19 07:31:47.905570 | orchestrator | "nonce": 0
2025-09-19 07:31:47.905578 | orchestrator | },
2025-09-19 07:31:47.905609 | orchestrator | {
2025-09-19 07:31:47.905617 | orchestrator | "type": "v1",
2025-09-19 07:31:47.905625 | orchestrator | "addr": "192.168.16.10:6789",
2025-09-19 07:31:47.905633 | orchestrator | "nonce": 0
2025-09-19 07:31:47.905641 | orchestrator | }
2025-09-19 07:31:47.905650 | orchestrator | ]
2025-09-19 07:31:47.905658 | orchestrator | },
2025-09-19 07:31:47.905666 | orchestrator | "addr": "192.168.16.10:6789/0",
2025-09-19 07:31:47.905674 | orchestrator | "public_addr": "192.168.16.10:6789/0",
2025-09-19 07:31:47.905682 | orchestrator | "priority": 0,
2025-09-19 07:31:47.905690 | orchestrator | "weight": 0,
2025-09-19 07:31:47.905699 | orchestrator | "crush_location": "{}"
2025-09-19 07:31:47.905707 | orchestrator | },
2025-09-19 07:31:47.905715 | orchestrator | {
2025-09-19 07:31:47.905723 | orchestrator | "rank": 1,
2025-09-19 07:31:47.905731 | orchestrator | "name": "testbed-node-1",
2025-09-19 07:31:47.905740 | orchestrator | "public_addrs": {
2025-09-19 07:31:47.905748 | orchestrator | "addrvec": [
2025-09-19 07:31:47.905756 | orchestrator | {
2025-09-19 07:31:47.905764 | orchestrator | "type": "v2",
2025-09-19 07:31:47.905772 | orchestrator | "addr": "192.168.16.11:3300",
2025-09-19 07:31:47.905780 | orchestrator | "nonce": 0
2025-09-19 07:31:47.905788 | orchestrator | },
2025-09-19 07:31:47.905797 | orchestrator | {
2025-09-19 07:31:47.905805 | orchestrator | "type": "v1",
2025-09-19 07:31:47.905813 | orchestrator | "addr": "192.168.16.11:6789",
2025-09-19 07:31:47.905820 | orchestrator | "nonce": 0
2025-09-19 07:31:47.905828 | orchestrator | }
2025-09-19 07:31:47.905837 | orchestrator | ]
2025-09-19 07:31:47.905845 | orchestrator | },
2025-09-19 07:31:47.905853 | orchestrator | "addr": "192.168.16.11:6789/0",
2025-09-19 07:31:47.905861 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2025-09-19 07:31:47.905869 | orchestrator | "priority": 0,
2025-09-19 07:31:47.905877 | orchestrator | "weight": 0,
2025-09-19 07:31:47.905886 | orchestrator | "crush_location": "{}"
2025-09-19 07:31:47.905894 | orchestrator | },
2025-09-19 07:31:47.905902 | orchestrator | {
2025-09-19 07:31:47.905910 | orchestrator | "rank": 2,
2025-09-19 07:31:47.905917 | orchestrator | "name": "testbed-node-2",
2025-09-19 07:31:47.905924 | orchestrator | "public_addrs": {
2025-09-19 07:31:47.905931 | orchestrator | "addrvec": [
2025-09-19 07:31:47.905938 | orchestrator | {
2025-09-19 07:31:47.905946 | orchestrator | "type": "v2",
2025-09-19 07:31:47.905953 | orchestrator | "addr": "192.168.16.12:3300",
2025-09-19 07:31:47.905960 | orchestrator | "nonce": 0
2025-09-19 07:31:47.905967 | orchestrator | },
2025-09-19 07:31:47.905974 | orchestrator | {
2025-09-19 07:31:47.905981 | orchestrator | "type": "v1",
2025-09-19 07:31:47.905988 | orchestrator | "addr": "192.168.16.12:6789",
2025-09-19 07:31:47.905995 | orchestrator | "nonce": 0
2025-09-19 07:31:47.906003 | orchestrator | }
2025-09-19 07:31:47.906010 | orchestrator | ]
2025-09-19 07:31:47.906068 | orchestrator | },
2025-09-19 07:31:47.906116 | orchestrator | "addr": "192.168.16.12:6789/0",
2025-09-19 07:31:47.906130 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2025-09-19 07:31:47.906138 | orchestrator | "priority": 0,
2025-09-19 07:31:47.906145 | orchestrator | "weight": 0,
2025-09-19 07:31:47.906152 | orchestrator | "crush_location": "{}"
2025-09-19 07:31:47.906159 | orchestrator | }
2025-09-19 07:31:47.906166 | orchestrator | ]
2025-09-19 07:31:47.906173 | orchestrator | }
2025-09-19 07:31:47.906180 | orchestrator | }
2025-09-19 07:31:47.906367 | orchestrator |
2025-09-19 07:31:47.906454 | orchestrator | # Ceph free space status
2025-09-19 07:31:47.906470 | orchestrator |
2025-09-19 07:31:47.906482 | orchestrator | + echo
2025-09-19 07:31:47.906511 | orchestrator | + echo '# Ceph free space status'
2025-09-19 07:31:47.906533 | orchestrator | + echo
2025-09-19 07:31:47.906544 | orchestrator | + ceph df
2025-09-19 07:31:48.514281 | orchestrator | --- RAW STORAGE ---
2025-09-19 07:31:48.514378 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2025-09-19 07:31:48.514406 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-09-19 07:31:48.514417 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-09-19 07:31:48.514429 | orchestrator |
2025-09-19 07:31:48.514441 | orchestrator | --- POOLS ---
2025-09-19 07:31:48.514453 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2025-09-19 07:31:48.514466 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB
2025-09-19 07:31:48.514477 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2025-09-19 07:31:48.514488 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2025-09-19 07:31:48.514499 | orchestrator
| default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-09-19 07:31:48.514509 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-09-19 07:31:48.514520 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-09-19 07:31:48.514531 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-09-19 07:31:48.514542 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-09-19 07:31:48.514552 | orchestrator | .rgw.root 9 32 3.0 KiB 7 56 KiB 0 53 GiB 2025-09-19 07:31:48.514563 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-09-19 07:31:48.514574 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-09-19 07:31:48.514622 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.95 35 GiB 2025-09-19 07:31:48.514633 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-09-19 07:31:48.514644 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-09-19 07:31:48.566556 | orchestrator | ++ semver 9.2.0 5.0.0 2025-09-19 07:31:48.630373 | orchestrator | + [[ 1 -eq -1 ]] 2025-09-19 07:31:48.630441 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-09-19 07:31:48.630454 | orchestrator | + osism apply facts 2025-09-19 07:32:00.682243 | orchestrator | 2025-09-19 07:32:00 | INFO  | Task 94bf69b6-e1aa-4de7-8b03-afc36ab763bf (facts) was prepared for execution. 2025-09-19 07:32:00.682350 | orchestrator | 2025-09-19 07:32:00 | INFO  | It takes a moment until task 94bf69b6-e1aa-4de7-8b03-afc36ab763bf (facts) has been started and output is visible here. 
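The `ceph mon dump` JSON earlier in this log lists each monitor with an `addrvec` carrying both its msgr2 (port 3300) and legacy v1 (port 6789) endpoints. A minimal sketch of pulling the msgr2 address per monitor out of that structure (the fragment below is trimmed to one monitor from the log output; it is illustrative, not the full dump):

```python
import json

# Illustrative fragment mirroring the `ceph mon dump -f json` structure
# shown in this log (trimmed to testbed-node-1; the real dump has more fields).
mon_dump = json.loads("""
{
  "mons": [
    {
      "rank": 1,
      "name": "testbed-node-1",
      "public_addrs": {
        "addrvec": [
          {"type": "v2", "addr": "192.168.16.11:3300", "nonce": 0},
          {"type": "v1", "addr": "192.168.16.11:6789", "nonce": 0}
        ]
      }
    }
  ]
}
""")


def v2_addrs(dump):
    """Map each monitor name to its msgr2 (type "v2") address."""
    return {
        mon["name"]: entry["addr"]
        for mon in dump["mons"]
        for entry in mon["public_addrs"]["addrvec"]
        if entry["type"] == "v2"
    }


print(v2_addrs(mon_dump))  # {'testbed-node-1': '192.168.16.11:3300'}
```

The same traversal works on the complete dump with all three monitors, which is essentially what a quorum or addressing check has to do with this output.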
2025-09-19 07:32:13.743470 | orchestrator |
2025-09-19 07:32:13.743586 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-09-19 07:32:13.743655 | orchestrator |
2025-09-19 07:32:13.743669 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-19 07:32:13.743681 | orchestrator | Friday 19 September 2025 07:32:04 +0000 (0:00:00.244) 0:00:00.244 ******
2025-09-19 07:32:13.743693 | orchestrator | ok: [testbed-manager]
2025-09-19 07:32:13.743705 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:32:13.743716 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:32:13.743727 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:32:13.743737 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:32:13.743748 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:32:13.743759 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:32:13.743769 | orchestrator |
2025-09-19 07:32:13.743806 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-19 07:32:13.743817 | orchestrator | Friday 19 September 2025 07:32:05 +0000 (0:00:01.360) 0:00:01.605 ******
2025-09-19 07:32:13.743828 | orchestrator | skipping: [testbed-manager]
2025-09-19 07:32:13.743840 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:32:13.743850 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:32:13.743861 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:32:13.743872 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:32:13.743882 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:32:13.743893 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:32:13.743903 | orchestrator |
2025-09-19 07:32:13.743914 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-19 07:32:13.743925 | orchestrator |
2025-09-19 07:32:13.743936 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-19 07:32:13.743946 | orchestrator | Friday 19 September 2025 07:32:06 +0000 (0:00:01.158) 0:00:02.763 ******
2025-09-19 07:32:13.743957 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:32:13.743968 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:32:13.743978 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:32:13.743989 | orchestrator | ok: [testbed-manager]
2025-09-19 07:32:13.744000 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:32:13.744014 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:32:13.744026 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:32:13.744038 | orchestrator |
2025-09-19 07:32:13.744051 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-19 07:32:13.744063 | orchestrator |
2025-09-19 07:32:13.744075 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-19 07:32:13.744088 | orchestrator | Friday 19 September 2025 07:32:12 +0000 (0:00:06.070) 0:00:08.834 ******
2025-09-19 07:32:13.744100 | orchestrator | skipping: [testbed-manager]
2025-09-19 07:32:13.744113 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:32:13.744125 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:32:13.744137 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:32:13.744149 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:32:13.744162 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:32:13.744174 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:32:13.744187 | orchestrator |
2025-09-19 07:32:13.744200 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:32:13.744229 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 07:32:13.744242 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 07:32:13.744253 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 07:32:13.744264 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 07:32:13.744274 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 07:32:13.744285 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 07:32:13.744296 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 07:32:13.744307 | orchestrator |
2025-09-19 07:32:13.744317 | orchestrator |
2025-09-19 07:32:13.744328 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:32:13.744339 | orchestrator | Friday 19 September 2025 07:32:13 +0000 (0:00:00.533) 0:00:09.368 ******
2025-09-19 07:32:13.744350 | orchestrator | ===============================================================================
2025-09-19 07:32:13.744368 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.07s
2025-09-19 07:32:13.744378 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.36s
2025-09-19 07:32:13.744389 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.16s
2025-09-19 07:32:13.744401 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s
2025-09-19 07:32:13.919184 | orchestrator | + osism validate ceph-mons
2025-09-19 07:32:44.752420 | orchestrator |
2025-09-19 07:32:44.752541 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2025-09-19 07:32:44.752559 | orchestrator |
2025-09-19 07:32:44.752571 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-09-19 07:32:44.752583 | orchestrator | Friday 19 September 2025 07:32:29 +0000 (0:00:00.429) 0:00:00.429 ******
2025-09-19 07:32:44.752596 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-19 07:32:44.752608 | orchestrator |
2025-09-19 07:32:44.752619 | orchestrator | TASK [Create report output directory] ******************************************
2025-09-19 07:32:44.752695 | orchestrator | Friday 19 September 2025 07:32:30 +0000 (0:00:00.657) 0:00:01.086 ******
2025-09-19 07:32:44.752708 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-19 07:32:44.752719 | orchestrator |
2025-09-19 07:32:44.752746 | orchestrator | TASK [Define report vars] ******************************************************
2025-09-19 07:32:44.752758 | orchestrator | Friday 19 September 2025 07:32:31 +0000 (0:00:00.825) 0:00:01.912 ******
2025-09-19 07:32:44.752769 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:32:44.752781 | orchestrator |
2025-09-19 07:32:44.752792 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-09-19 07:32:44.752803 | orchestrator | Friday 19 September 2025 07:32:31 +0000 (0:00:00.257) 0:00:02.169 ******
2025-09-19 07:32:44.752814 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:32:44.752825 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:32:44.752836 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:32:44.752847 | orchestrator |
2025-09-19 07:32:44.752858 | orchestrator | TASK [Get container info] ******************************************************
2025-09-19 07:32:44.752869 | orchestrator | Friday 19 September 2025 07:32:31 +0000 (0:00:00.290) 0:00:02.460 ******
2025-09-19 07:32:44.752880 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:32:44.752891 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:32:44.752902 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:32:44.752915 | orchestrator |
2025-09-19 07:32:44.752928 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-09-19 07:32:44.752940 | orchestrator | Friday 19 September 2025 07:32:32 +0000 (0:00:00.989) 0:00:03.449 ******
2025-09-19 07:32:44.752953 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:32:44.752966 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:32:44.752978 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:32:44.752991 | orchestrator |
2025-09-19 07:32:44.753003 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-09-19 07:32:44.753016 | orchestrator | Friday 19 September 2025 07:32:33 +0000 (0:00:00.315) 0:00:03.764 ******
2025-09-19 07:32:44.753028 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:32:44.753040 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:32:44.753053 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:32:44.753065 | orchestrator |
2025-09-19 07:32:44.753078 | orchestrator | TASK [Prepare test data] *******************************************************
2025-09-19 07:32:44.753091 | orchestrator | Friday 19 September 2025 07:32:33 +0000 (0:00:00.477) 0:00:04.242 ******
2025-09-19 07:32:44.753103 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:32:44.753116 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:32:44.753128 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:32:44.753140 | orchestrator |
2025-09-19 07:32:44.753152 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2025-09-19 07:32:44.753164 | orchestrator | Friday 19 September 2025 07:32:34 +0000 (0:00:00.299) 0:00:04.541 ******
2025-09-19 07:32:44.753201 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:32:44.753215 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:32:44.753227 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:32:44.753239 | orchestrator |
2025-09-19 07:32:44.753250 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2025-09-19 07:32:44.753262 | orchestrator | Friday 19 September 2025 07:32:34 +0000 (0:00:00.314) 0:00:04.856 ******
2025-09-19 07:32:44.753272 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:32:44.753283 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:32:44.753294 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:32:44.753304 | orchestrator |
2025-09-19 07:32:44.753315 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-09-19 07:32:44.753326 | orchestrator | Friday 19 September 2025 07:32:34 +0000 (0:00:00.307) 0:00:05.163 ******
2025-09-19 07:32:44.753336 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:32:44.753347 | orchestrator |
2025-09-19 07:32:44.753358 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-09-19 07:32:44.753368 | orchestrator | Friday 19 September 2025 07:32:35 +0000 (0:00:00.645) 0:00:05.809 ******
2025-09-19 07:32:44.753379 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:32:44.753389 | orchestrator |
2025-09-19 07:32:44.753400 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-09-19 07:32:44.753411 | orchestrator | Friday 19 September 2025 07:32:35 +0000 (0:00:00.271) 0:00:06.080 ******
2025-09-19 07:32:44.753422 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:32:44.753432 | orchestrator |
2025-09-19 07:32:44.753443 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 07:32:44.753454 | orchestrator | Friday 19 September 2025 07:32:35 +0000 (0:00:00.259) 0:00:06.340 ******
2025-09-19 07:32:44.753464 | orchestrator |
2025-09-19 07:32:44.753475 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 07:32:44.753486 | orchestrator | Friday 19 September 2025 07:32:35 +0000 (0:00:00.069) 0:00:06.409 ******
2025-09-19 07:32:44.753497 | orchestrator |
2025-09-19 07:32:44.753507 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 07:32:44.753518 | orchestrator | Friday 19 September 2025 07:32:35 +0000 (0:00:00.073) 0:00:06.482 ******
2025-09-19 07:32:44.753528 | orchestrator |
2025-09-19 07:32:44.753539 | orchestrator | TASK [Print report file information] *******************************************
2025-09-19 07:32:44.753550 | orchestrator | Friday 19 September 2025 07:32:36 +0000 (0:00:00.081) 0:00:06.564 ******
2025-09-19 07:32:44.753561 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:32:44.753571 | orchestrator |
2025-09-19 07:32:44.753583 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-09-19 07:32:44.753594 | orchestrator | Friday 19 September 2025 07:32:36 +0000 (0:00:00.292) 0:00:06.856 ******
2025-09-19 07:32:44.753604 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:32:44.753615 | orchestrator |
2025-09-19 07:32:44.753665 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2025-09-19 07:32:44.753677 | orchestrator | Friday 19 September 2025 07:32:36 +0000 (0:00:00.250) 0:00:07.107 ******
2025-09-19 07:32:44.753688 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:32:44.753699 | orchestrator |
2025-09-19 07:32:44.753710 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2025-09-19 07:32:44.753720 | orchestrator | Friday 19 September 2025 07:32:36 +0000 (0:00:00.123) 0:00:07.230 ******
2025-09-19 07:32:44.753731 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:32:44.753742 | orchestrator |
2025-09-19 07:32:44.753753 | orchestrator | TASK [Set quorum test data] ****************************************************
2025-09-19 07:32:44.753763 | orchestrator | Friday 19 September 2025 07:32:38 +0000 (0:00:01.501) 0:00:08.731 ******
2025-09-19 07:32:44.753774 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:32:44.753785 | orchestrator |
2025-09-19 07:32:44.753795 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2025-09-19 07:32:44.753815 | orchestrator | Friday 19 September 2025 07:32:38 +0000 (0:00:00.296) 0:00:09.028 ******
2025-09-19 07:32:44.753826 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:32:44.753836 | orchestrator |
2025-09-19 07:32:44.753847 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2025-09-19 07:32:44.753858 | orchestrator | Friday 19 September 2025 07:32:38 +0000 (0:00:00.299) 0:00:09.328 ******
2025-09-19 07:32:44.753869 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:32:44.753880 | orchestrator |
2025-09-19 07:32:44.753891 | orchestrator | TASK [Set fsid test vars] ******************************************************
2025-09-19 07:32:44.753901 | orchestrator | Friday 19 September 2025 07:32:39 +0000 (0:00:00.317) 0:00:09.646 ******
2025-09-19 07:32:44.753912 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:32:44.753922 | orchestrator |
2025-09-19 07:32:44.753933 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2025-09-19 07:32:44.753944 | orchestrator | Friday 19 September 2025 07:32:39 +0000 (0:00:00.315) 0:00:09.961 ******
2025-09-19 07:32:44.753954 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:32:44.753965 | orchestrator |
2025-09-19 07:32:44.753976 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2025-09-19 07:32:44.753987 | orchestrator | Friday 19 September 2025 07:32:39 +0000 (0:00:00.111) 0:00:10.073 ******
2025-09-19 07:32:44.753997 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:32:44.754008 | orchestrator |
2025-09-19 07:32:44.754150 | orchestrator | TASK [Prepare status test vars] ************************************************
2025-09-19 07:32:44.754164 | orchestrator | Friday 19 September 2025 07:32:39 +0000 (0:00:00.125) 0:00:10.198 ******
2025-09-19 07:32:44.754175 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:32:44.754186 | orchestrator |
2025-09-19 07:32:44.754197 | orchestrator | TASK [Gather status data] ******************************************************
2025-09-19 07:32:44.754207 | orchestrator | Friday 19 September 2025 07:32:39 +0000 (0:00:00.126) 0:00:10.325 ******
2025-09-19 07:32:44.754218 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:32:44.754229 | orchestrator |
2025-09-19 07:32:44.754239 | orchestrator | TASK [Set health test data] ****************************************************
2025-09-19 07:32:44.754250 | orchestrator | Friday 19 September 2025 07:32:41 +0000 (0:00:01.389) 0:00:11.715 ******
2025-09-19 07:32:44.754261 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:32:44.754272 | orchestrator |
2025-09-19 07:32:44.754282 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2025-09-19 07:32:44.754293 | orchestrator | Friday 19 September 2025 07:32:41 +0000 (0:00:00.272) 0:00:11.988 ******
2025-09-19 07:32:44.754304 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:32:44.754315 | orchestrator |
2025-09-19 07:32:44.754326 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2025-09-19 07:32:44.754337 | orchestrator | Friday 19 September 2025 07:32:41 +0000 (0:00:00.121) 0:00:12.110 ******
2025-09-19 07:32:44.754347 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:32:44.754358 | orchestrator |
2025-09-19 07:32:44.754369 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2025-09-19 07:32:44.754380 | orchestrator | Friday 19 September 2025 07:32:41 +0000 (0:00:00.131) 0:00:12.241 ******
2025-09-19 07:32:44.754391 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:32:44.754401 | orchestrator |
2025-09-19 07:32:44.754412 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2025-09-19 07:32:44.754423 | orchestrator | Friday 19 September 2025 07:32:41 +0000 (0:00:00.122) 0:00:12.363 ******
2025-09-19 07:32:44.754433 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:32:44.754444 | orchestrator |
2025-09-19 07:32:44.754455 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-09-19 07:32:44.754465 | orchestrator | Friday 19 September 2025 07:32:42 +0000 (0:00:00.241) 0:00:12.605 ******
2025-09-19 07:32:44.754476 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-19 07:32:44.754487 | orchestrator |
2025-09-19 07:32:44.754498 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-09-19 07:32:44.754517 | orchestrator | Friday 19 September 2025 07:32:42 +0000 (0:00:00.233) 0:00:12.839 ******
2025-09-19 07:32:44.754528 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:32:44.754539 | orchestrator |
2025-09-19 07:32:44.754549 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-09-19 07:32:44.754560 | orchestrator | Friday 19 September 2025 07:32:42 +0000 (0:00:00.231) 0:00:13.071 ******
2025-09-19 07:32:44.754571 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-19 07:32:44.754582 | orchestrator |
2025-09-19 07:32:44.754593 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-09-19 07:32:44.754604 | orchestrator | Friday 19 September 2025 07:32:44 +0000 (0:00:01.513) 0:00:14.585 ******
2025-09-19 07:32:44.754614 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-19 07:32:44.754647 | orchestrator |
2025-09-19 07:32:44.754667 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-09-19 07:32:44.754686 | orchestrator | Friday 19 September 2025 07:32:44 +0000 (0:00:00.229) 0:00:14.814 ******
2025-09-19 07:32:44.754706 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-19 07:32:44.754718 | orchestrator |
2025-09-19 07:32:44.754739 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 07:32:46.674138 | orchestrator | Friday 19 September 2025 07:32:44 +0000 (0:00:00.214) 0:00:15.028 ******
2025-09-19 07:32:46.674238 | orchestrator |
2025-09-19 07:32:46.674254 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 07:32:46.674268 | orchestrator | Friday 19 September 2025 07:32:44 +0000 (0:00:00.065) 0:00:15.094 ******
2025-09-19 07:32:46.674284 | orchestrator |
2025-09-19 07:32:46.674296 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 07:32:46.674328 | orchestrator | Friday 19 September 2025 07:32:44 +0000 (0:00:00.072) 0:00:15.167 ******
2025-09-19 07:32:46.674341 | orchestrator |
2025-09-19 07:32:46.674352 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-09-19 07:32:46.674367 | orchestrator | Friday 19 September 2025 07:32:44 +0000 (0:00:00.068) 0:00:15.236 ******
2025-09-19 07:32:46.674380 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-19 07:32:46.674391 | orchestrator |
2025-09-19 07:32:46.674402 | orchestrator | TASK [Print report file information] *******************************************
2025-09-19 07:32:46.674414 | orchestrator | Friday 19 September 2025 07:32:45 +0000 (0:00:01.256) 0:00:16.492 ******
2025-09-19 07:32:46.674425 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-09-19 07:32:46.674436 | orchestrator |     "msg": [
2025-09-19 07:32:46.674449 | orchestrator |         "Validator run completed.",
2025-09-19 07:32:46.674461 | orchestrator |         "You can find the report file here:",
2025-09-19 07:32:46.674473 | orchestrator |         "/opt/reports/validator/ceph-mons-validator-2025-09-19T07:32:30+00:00-report.json",
2025-09-19 07:32:46.674485 | orchestrator |         "on the following host:",
2025-09-19 07:32:46.674497 | orchestrator |         "testbed-manager"
2025-09-19 07:32:46.674508 | orchestrator |     ]
2025-09-19 07:32:46.674520 | orchestrator | }
2025-09-19 07:32:46.674532 | orchestrator |
2025-09-19 07:32:46.674543 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:32:46.674556 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-19 07:32:46.674569 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 07:32:46.674581 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 07:32:46.674594 | orchestrator |
2025-09-19 07:32:46.674607 | orchestrator |
2025-09-19 07:32:46.674620 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:32:46.674679 | orchestrator | Friday 19 September 2025 07:32:46 +0000 (0:00:00.481) 0:00:16.974 ******
2025-09-19 07:32:46.674692 | orchestrator | ===============================================================================
2025-09-19 07:32:46.674705 | orchestrator | Aggregate test results step one ----------------------------------------- 1.51s
2025-09-19 07:32:46.674717 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.50s
2025-09-19 07:32:46.674730 | orchestrator | Gather status data ------------------------------------------------------ 1.39s
2025-09-19 07:32:46.674742 | orchestrator | Write report file ------------------------------------------------------- 1.26s
2025-09-19 07:32:46.674755 | orchestrator | Get container info ------------------------------------------------------ 0.99s
2025-09-19 07:32:46.674767 | orchestrator | Create report output directory ------------------------------------------ 0.83s
2025-09-19 07:32:46.674780 | orchestrator | Get timestamp for report file ------------------------------------------- 0.66s
2025-09-19 07:32:46.674792 | orchestrator | Aggregate test results step one ----------------------------------------- 0.65s
2025-09-19 07:32:46.674804 | orchestrator | Print report file information ------------------------------------------- 0.48s
2025-09-19 07:32:46.674816 | orchestrator | Set test result to passed if container is existing ---------------------- 0.48s
2025-09-19 07:32:46.674829 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.32s
2025-09-19 07:32:46.674841 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.32s
2025-09-19 07:32:46.674854 | orchestrator | Set test result to failed if container is missing ----------------------- 0.32s
2025-09-19 07:32:46.674866 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.31s
2025-09-19 07:32:46.674877 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.31s
2025-09-19 07:32:46.674887 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.30s
2025-09-19 07:32:46.674898 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s
2025-09-19 07:32:46.674909 | orchestrator | Set quorum test data ---------------------------------------------------- 0.30s
2025-09-19 07:32:46.674920 | orchestrator | Print report file information ------------------------------------------- 0.29s
2025-09-19 07:32:46.674931 | orchestrator | Prepare test data for container existance test -------------------------- 0.29s
existance test -------------------------- 0.29s 2025-09-19 07:32:46.874724 | orchestrator | + osism validate ceph-mgrs 2025-09-19 07:33:16.934419 | orchestrator | 2025-09-19 07:33:16.934530 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-09-19 07:33:16.934546 | orchestrator | 2025-09-19 07:33:16.934558 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-09-19 07:33:16.934570 | orchestrator | Friday 19 September 2025 07:33:02 +0000 (0:00:00.402) 0:00:00.402 ****** 2025-09-19 07:33:16.934582 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 07:33:16.934593 | orchestrator | 2025-09-19 07:33:16.934604 | orchestrator | TASK [Create report output directory] ****************************************** 2025-09-19 07:33:16.934615 | orchestrator | Friday 19 September 2025 07:33:03 +0000 (0:00:00.553) 0:00:00.956 ****** 2025-09-19 07:33:16.934626 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 07:33:16.934637 | orchestrator | 2025-09-19 07:33:16.934708 | orchestrator | TASK [Define report vars] ****************************************************** 2025-09-19 07:33:16.934723 | orchestrator | Friday 19 September 2025 07:33:03 +0000 (0:00:00.705) 0:00:01.661 ****** 2025-09-19 07:33:16.934734 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:33:16.934745 | orchestrator | 2025-09-19 07:33:16.934756 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-09-19 07:33:16.934767 | orchestrator | Friday 19 September 2025 07:33:04 +0000 (0:00:00.198) 0:00:01.859 ****** 2025-09-19 07:33:16.934778 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:33:16.934789 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:33:16.934820 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:33:16.934832 | orchestrator | 2025-09-19 07:33:16.934843 | orchestrator | TASK [Get container 
info] ****************************************************** 2025-09-19 07:33:16.934877 | orchestrator | Friday 19 September 2025 07:33:04 +0000 (0:00:00.270) 0:00:02.130 ****** 2025-09-19 07:33:16.934888 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:33:16.934899 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:33:16.934910 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:33:16.934920 | orchestrator | 2025-09-19 07:33:16.934931 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-09-19 07:33:16.934941 | orchestrator | Friday 19 September 2025 07:33:05 +0000 (0:00:00.918) 0:00:03.049 ****** 2025-09-19 07:33:16.934953 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:33:16.934964 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:33:16.934975 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:33:16.934986 | orchestrator | 2025-09-19 07:33:16.934996 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-09-19 07:33:16.935007 | orchestrator | Friday 19 September 2025 07:33:05 +0000 (0:00:00.279) 0:00:03.328 ****** 2025-09-19 07:33:16.935018 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:33:16.935029 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:33:16.935039 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:33:16.935050 | orchestrator | 2025-09-19 07:33:16.935060 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-19 07:33:16.935071 | orchestrator | Friday 19 September 2025 07:33:06 +0000 (0:00:00.482) 0:00:03.810 ****** 2025-09-19 07:33:16.935082 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:33:16.935092 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:33:16.935103 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:33:16.935114 | orchestrator | 2025-09-19 07:33:16.935124 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2025-09-19 07:33:16.935135 | orchestrator | Friday 19 September 2025 07:33:06 +0000 (0:00:00.348) 0:00:04.158 ****** 2025-09-19 07:33:16.935146 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:33:16.935156 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:33:16.935167 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:33:16.935177 | orchestrator | 2025-09-19 07:33:16.935188 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-09-19 07:33:16.935199 | orchestrator | Friday 19 September 2025 07:33:06 +0000 (0:00:00.304) 0:00:04.463 ****** 2025-09-19 07:33:16.935209 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:33:16.935220 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:33:16.935231 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:33:16.935241 | orchestrator | 2025-09-19 07:33:16.935252 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-19 07:33:16.935262 | orchestrator | Friday 19 September 2025 07:33:06 +0000 (0:00:00.308) 0:00:04.771 ****** 2025-09-19 07:33:16.935273 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:33:16.935284 | orchestrator | 2025-09-19 07:33:16.935294 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-19 07:33:16.935305 | orchestrator | Friday 19 September 2025 07:33:07 +0000 (0:00:00.659) 0:00:05.431 ****** 2025-09-19 07:33:16.935315 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:33:16.935326 | orchestrator | 2025-09-19 07:33:16.935337 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-19 07:33:16.935347 | orchestrator | Friday 19 September 2025 07:33:07 +0000 (0:00:00.262) 0:00:05.693 ****** 2025-09-19 07:33:16.935358 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:33:16.935368 | orchestrator | 2025-09-19 07:33:16.935379 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2025-09-19 07:33:16.935390 | orchestrator | Friday 19 September 2025 07:33:08 +0000 (0:00:00.270) 0:00:05.964 ****** 2025-09-19 07:33:16.935400 | orchestrator | 2025-09-19 07:33:16.935411 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 07:33:16.935422 | orchestrator | Friday 19 September 2025 07:33:08 +0000 (0:00:00.073) 0:00:06.037 ****** 2025-09-19 07:33:16.935432 | orchestrator | 2025-09-19 07:33:16.935443 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 07:33:16.935461 | orchestrator | Friday 19 September 2025 07:33:08 +0000 (0:00:00.070) 0:00:06.108 ****** 2025-09-19 07:33:16.935472 | orchestrator | 2025-09-19 07:33:16.935483 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-19 07:33:16.935494 | orchestrator | Friday 19 September 2025 07:33:08 +0000 (0:00:00.071) 0:00:06.179 ****** 2025-09-19 07:33:16.935504 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:33:16.935515 | orchestrator | 2025-09-19 07:33:16.935526 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-09-19 07:33:16.935537 | orchestrator | Friday 19 September 2025 07:33:08 +0000 (0:00:00.248) 0:00:06.428 ****** 2025-09-19 07:33:16.935548 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:33:16.935559 | orchestrator | 2025-09-19 07:33:16.935589 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-09-19 07:33:16.935600 | orchestrator | Friday 19 September 2025 07:33:08 +0000 (0:00:00.247) 0:00:06.675 ****** 2025-09-19 07:33:16.935611 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:33:16.935622 | orchestrator | 2025-09-19 07:33:16.935633 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
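The next tasks gather `ceph mgr module ls` output, parse it from JSON, extract the enabled modules, and fail the test if a required module is disabled. A minimal sketch of that check, assuming the JSON carries `always_on_modules`, `enabled_modules`, and `disabled_modules` keys (the sample module names and the `required` set below are illustrative, not taken from this job's log):

```python
import json

# Hypothetical sample shaped like `ceph mgr module ls --format json` output.
module_ls_json = json.dumps({
    "always_on_modules": ["balancer", "crash", "status"],
    "enabled_modules": ["dashboard", "prometheus"],
    "disabled_modules": [{"name": "telemetry"}, {"name": "zabbix"}],
})

def enabled_mgr_modules(raw):
    """Return the set of mgr modules that are effectively enabled
    (always-on modules plus explicitly enabled ones)."""
    data = json.loads(raw)
    enabled = set(data.get("always_on_modules", []))
    enabled.update(data.get("enabled_modules", []))
    return enabled

required = {"balancer", "crash", "prometheus"}  # assumed required set
enabled = enabled_mgr_modules(module_ls_json)
missing = required - enabled
result = "passed" if not missing else "failed"
```

With the sample data above, every required module is enabled, so the check passes; a required module appearing only under `disabled_modules` would flip `result` to `failed`, mirroring the "Fail test if mgr modules are disabled that should be enabled" task.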
2025-09-19 07:33:16.935643 | orchestrator | Friday 19 September 2025 07:33:09 +0000 (0:00:00.119) 0:00:06.794 ****** 2025-09-19 07:33:16.935672 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:33:16.935683 | orchestrator | 2025-09-19 07:33:16.935694 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-09-19 07:33:16.935704 | orchestrator | Friday 19 September 2025 07:33:11 +0000 (0:00:01.996) 0:00:08.791 ****** 2025-09-19 07:33:16.935715 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:33:16.935725 | orchestrator | 2025-09-19 07:33:16.935736 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-09-19 07:33:16.935746 | orchestrator | Friday 19 September 2025 07:33:11 +0000 (0:00:00.238) 0:00:09.029 ****** 2025-09-19 07:33:16.935757 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:33:16.935767 | orchestrator | 2025-09-19 07:33:16.935778 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-09-19 07:33:16.935789 | orchestrator | Friday 19 September 2025 07:33:11 +0000 (0:00:00.530) 0:00:09.560 ****** 2025-09-19 07:33:16.935799 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:33:16.935810 | orchestrator | 2025-09-19 07:33:16.935821 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-09-19 07:33:16.935832 | orchestrator | Friday 19 September 2025 07:33:11 +0000 (0:00:00.131) 0:00:09.691 ****** 2025-09-19 07:33:16.935842 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:33:16.935853 | orchestrator | 2025-09-19 07:33:16.935864 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-09-19 07:33:16.935874 | orchestrator | Friday 19 September 2025 07:33:12 +0000 (0:00:00.134) 0:00:09.825 ****** 2025-09-19 07:33:16.935885 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 
07:33:16.935896 | orchestrator | 2025-09-19 07:33:16.935907 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-09-19 07:33:16.935917 | orchestrator | Friday 19 September 2025 07:33:12 +0000 (0:00:00.248) 0:00:10.073 ****** 2025-09-19 07:33:16.935928 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:33:16.935939 | orchestrator | 2025-09-19 07:33:16.935949 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-19 07:33:16.935960 | orchestrator | Friday 19 September 2025 07:33:12 +0000 (0:00:00.267) 0:00:10.341 ****** 2025-09-19 07:33:16.935970 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 07:33:16.935981 | orchestrator | 2025-09-19 07:33:16.935992 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-19 07:33:16.936002 | orchestrator | Friday 19 September 2025 07:33:13 +0000 (0:00:01.291) 0:00:11.632 ****** 2025-09-19 07:33:16.936013 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 07:33:16.936023 | orchestrator | 2025-09-19 07:33:16.936034 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-19 07:33:16.936053 | orchestrator | Friday 19 September 2025 07:33:14 +0000 (0:00:00.250) 0:00:11.883 ****** 2025-09-19 07:33:16.936063 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 07:33:16.936074 | orchestrator | 2025-09-19 07:33:16.936085 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 07:33:16.936095 | orchestrator | Friday 19 September 2025 07:33:14 +0000 (0:00:00.258) 0:00:12.141 ****** 2025-09-19 07:33:16.936106 | orchestrator | 2025-09-19 07:33:16.936116 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 07:33:16.936127 | orchestrator 
| Friday 19 September 2025 07:33:14 +0000 (0:00:00.069) 0:00:12.211 ****** 2025-09-19 07:33:16.936138 | orchestrator | 2025-09-19 07:33:16.936148 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 07:33:16.936159 | orchestrator | Friday 19 September 2025 07:33:14 +0000 (0:00:00.079) 0:00:12.291 ****** 2025-09-19 07:33:16.936169 | orchestrator | 2025-09-19 07:33:16.936180 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-09-19 07:33:16.936191 | orchestrator | Friday 19 September 2025 07:33:14 +0000 (0:00:00.070) 0:00:12.362 ****** 2025-09-19 07:33:16.936201 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 07:33:16.936212 | orchestrator | 2025-09-19 07:33:16.936222 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-19 07:33:16.936233 | orchestrator | Friday 19 September 2025 07:33:16 +0000 (0:00:01.812) 0:00:14.174 ****** 2025-09-19 07:33:16.936243 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-09-19 07:33:16.936254 | orchestrator |  "msg": [ 2025-09-19 07:33:16.936265 | orchestrator |  "Validator run completed.", 2025-09-19 07:33:16.936276 | orchestrator |  "You can find the report file here:", 2025-09-19 07:33:16.936287 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-09-19T07:33:03+00:00-report.json", 2025-09-19 07:33:16.936298 | orchestrator |  "on the following host:", 2025-09-19 07:33:16.936309 | orchestrator |  "testbed-manager" 2025-09-19 07:33:16.936319 | orchestrator |  ] 2025-09-19 07:33:16.936330 | orchestrator | } 2025-09-19 07:33:16.936341 | orchestrator | 2025-09-19 07:33:16.936351 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:33:16.936363 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2025-09-19 07:33:16.936374 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 07:33:16.936400 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 07:33:17.294270 | orchestrator | 2025-09-19 07:33:17.295220 | orchestrator | 2025-09-19 07:33:17.295255 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:33:17.295270 | orchestrator | Friday 19 September 2025 07:33:16 +0000 (0:00:00.519) 0:00:14.693 ****** 2025-09-19 07:33:17.295281 | orchestrator | =============================================================================== 2025-09-19 07:33:17.295292 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.00s 2025-09-19 07:33:17.295303 | orchestrator | Write report file ------------------------------------------------------- 1.81s 2025-09-19 07:33:17.295315 | orchestrator | Aggregate test results step one ----------------------------------------- 1.29s 2025-09-19 07:33:17.295326 | orchestrator | Get container info ------------------------------------------------------ 0.92s 2025-09-19 07:33:17.295337 | orchestrator | Create report output directory ------------------------------------------ 0.71s 2025-09-19 07:33:17.295347 | orchestrator | Aggregate test results step one ----------------------------------------- 0.66s 2025-09-19 07:33:17.295358 | orchestrator | Get timestamp for report file ------------------------------------------- 0.55s 2025-09-19 07:33:17.295393 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.53s 2025-09-19 07:33:17.295420 | orchestrator | Print report file information ------------------------------------------- 0.52s 2025-09-19 07:33:17.295431 | orchestrator | Set test result to passed if container is existing ---------------------- 0.48s 2025-09-19 07:33:17.295442 | 
orchestrator | Prepare test data ------------------------------------------------------- 0.35s 2025-09-19 07:33:17.295453 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.31s 2025-09-19 07:33:17.295463 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.30s 2025-09-19 07:33:17.295474 | orchestrator | Set test result to failed if container is missing ----------------------- 0.28s 2025-09-19 07:33:17.295484 | orchestrator | Aggregate test results step three --------------------------------------- 0.27s 2025-09-19 07:33:17.295495 | orchestrator | Prepare test data for container existence test -------------------------- 0.27s 2025-09-19 07:33:17.295506 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.27s 2025-09-19 07:33:17.295516 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s 2025-09-19 07:33:17.295527 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s 2025-09-19 07:33:17.295538 | orchestrator | Aggregate test results step two ----------------------------------------- 0.25s 2025-09-19 07:33:17.591406 | orchestrator | + osism validate ceph-osds 2025-09-19 07:33:37.613117 | orchestrator | 2025-09-19 07:33:37.613221 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-09-19 07:33:37.613236 | orchestrator | 2025-09-19 07:33:37.613248 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-09-19 07:33:37.613260 | orchestrator | Friday 19 September 2025 07:33:33 +0000 (0:00:00.429) 0:00:00.429 ****** 2025-09-19 07:33:37.613272 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 07:33:37.613283 | orchestrator | 2025-09-19 07:33:37.613294 | orchestrator | TASK [Get extra vars for Ceph configuration]
*********************************** 2025-09-19 07:33:37.613305 | orchestrator | Friday 19 September 2025 07:33:34 +0000 (0:00:00.639) 0:00:01.068 ****** 2025-09-19 07:33:37.613315 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 07:33:37.613326 | orchestrator | 2025-09-19 07:33:37.613337 | orchestrator | TASK [Create report output directory] ****************************************** 2025-09-19 07:33:37.613349 | orchestrator | Friday 19 September 2025 07:33:34 +0000 (0:00:00.229) 0:00:01.298 ****** 2025-09-19 07:33:37.613360 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 07:33:37.613371 | orchestrator | 2025-09-19 07:33:37.613382 | orchestrator | TASK [Define report vars] ****************************************************** 2025-09-19 07:33:37.613393 | orchestrator | Friday 19 September 2025 07:33:35 +0000 (0:00:00.811) 0:00:02.109 ****** 2025-09-19 07:33:37.613404 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:33:37.613416 | orchestrator | 2025-09-19 07:33:37.613427 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-09-19 07:33:37.613438 | orchestrator | Friday 19 September 2025 07:33:35 +0000 (0:00:00.116) 0:00:02.226 ****** 2025-09-19 07:33:37.613449 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:33:37.613460 | orchestrator | 2025-09-19 07:33:37.613471 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-09-19 07:33:37.613482 | orchestrator | Friday 19 September 2025 07:33:35 +0000 (0:00:00.113) 0:00:02.339 ****** 2025-09-19 07:33:37.613493 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:33:37.613503 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:33:37.613514 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:33:37.613525 | orchestrator | 2025-09-19 07:33:37.613536 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2025-09-19 07:33:37.613547 | orchestrator | Friday 19 September 2025 07:33:36 +0000 (0:00:00.264) 0:00:02.604 ****** 2025-09-19 07:33:37.613558 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:33:37.613593 | orchestrator | 2025-09-19 07:33:37.613605 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-09-19 07:33:37.613616 | orchestrator | Friday 19 September 2025 07:33:36 +0000 (0:00:00.138) 0:00:02.743 ****** 2025-09-19 07:33:37.613626 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:33:37.613637 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:33:37.613648 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:33:37.613660 | orchestrator | 2025-09-19 07:33:37.613702 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-09-19 07:33:37.613715 | orchestrator | Friday 19 September 2025 07:33:36 +0000 (0:00:00.309) 0:00:03.053 ****** 2025-09-19 07:33:37.613728 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:33:37.613740 | orchestrator | 2025-09-19 07:33:37.613753 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-19 07:33:37.613765 | orchestrator | Friday 19 September 2025 07:33:37 +0000 (0:00:00.476) 0:00:03.529 ****** 2025-09-19 07:33:37.613777 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:33:37.613790 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:33:37.613802 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:33:37.613814 | orchestrator | 2025-09-19 07:33:37.613825 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-09-19 07:33:37.613836 | orchestrator | Friday 19 September 2025 07:33:37 +0000 (0:00:00.383) 0:00:03.912 ****** 2025-09-19 07:33:37.613849 | orchestrator | skipping: [testbed-node-3] => (item={'id': '20cbb3c55553566e342eb68286c1e2b3632f18afc4acebea25d3db5fd5290243', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-09-19 07:33:37.613862 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e252574755554cfccb3bb696e54ac2aa697b5fa52fdac4b2d6ba02227b9a4e64', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-09-19 07:33:37.613888 | orchestrator | skipping: [testbed-node-3] => (item={'id': '33900812f71f92de5e7d203d42040f9070bbc4424595d02f60e8a23e038deb61', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-09-19 07:33:37.613901 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4617dffbb790c257134e4b18c2a14310c66bdc2681567e5e65f050fcb6603e2d', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-09-19 07:33:37.613914 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7433eadb53edffe272fbd3d79d765a6fd6b238b39ad403de9fbe6c36761c9835', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-09-19 07:33:37.613943 | orchestrator | skipping: [testbed-node-3] => (item={'id': '99a548a2fc28da634b1b39e523f32e363ba42cd6df5b174a92c5452f5e03af78', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-09-19 07:33:37.613955 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6c4472a49e535085f9e4a06455f6560519d1052eafaa0fab33353f5fb7a726b5', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': 
'/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-09-19 07:33:37.613978 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a2dcfcd85eccf3adf9d5dcc866ac84b2927b2465b3ef4386436de799ff475a03', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-09-19 07:33:37.613989 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2bf891f8374252494aed72c0f766541ce22b747ec7ff6b143946e830873fe0ab', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-09-19 07:33:37.614008 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c68309333d11d24dd363c84d22230ca1b1a95e28998bdf1dacef742b5fd121f7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-09-19 07:33:37.614096 | orchestrator | skipping: [testbed-node-3] => (item={'id': '972d238e4b962a8d7845c4335e9e9b20a0b92cf7a9705aaaf6e9a1da5a349e4d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2025-09-19 07:33:37.614113 | orchestrator | skipping: [testbed-node-3] => (item={'id': '16eff4947c95d1fb5cfa0c954eb5848f512dcce1100c21d2a6bd2e515414e63e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-09-19 07:33:37.614126 | orchestrator | ok: [testbed-node-3] => (item={'id': '303d28dc029fa916370b5a862cb522ac1bd0b4f7e80b83b8ab351f4d389656bf', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-09-19 07:33:37.614139 | orchestrator | ok: [testbed-node-3] => (item={'id': 
'f48cd2d1dd4f5d0095aa169fac1a18b469f261df9a0499f09426bc1375dd2491', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-09-19 07:33:37.614150 | orchestrator | skipping: [testbed-node-3] => (item={'id': '13a596339b7a90e78c1a94cb975aa3525b47722524c1417ca6c0dc01f23bd988', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-09-19 07:33:37.614161 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0bc2ed99ffa982c8bf5b525b4fb44c38d19fe998b36a59c4e7e0eacc32728f9d', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-09-19 07:33:37.614173 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ecfd028b2d5d11eaed26ac70610dddbdda6759624c64433f9c39cca4ccdf592e', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-09-19 07:33:37.614185 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6cfb6e5cf7174844eaaf83351aebf163f59a880f67ca52b29efc382934cde33e', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2025-09-19 07:33:37.614196 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'abcf28f1b3cca0330e2cbf3d73ffe838fa75e8db57a27133f80f8e7667aff956', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-09-19 07:33:37.614207 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8b9f47a7346c335c5ccda00a00575147d5450099d5486d0b90c89c3319e896b1', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': 
'/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2025-09-19 07:33:37.614227 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ffe7960a3089f85fe0f75669d525662583eb6bb208eb1704bb88a40ba420f7f9', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-09-19 07:33:37.877465 | orchestrator | skipping: [testbed-node-3] => (item={'id': '684e84f1eabb5116bac1bb9639fdede907c16a02c939bb3731a8a69fa57d18d3', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2025-09-19 07:33:37.877559 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5022c5988152a522f997cfea90248a90ed70cc3c473caf68df4b3b8ddb007008', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-09-19 07:33:37.877597 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0eb4c3a1f5e966e2861d15329d228a64d076c226f08799cb15a3615257f8325b', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-09-19 07:33:37.877610 | orchestrator | skipping: [testbed-node-4] => (item={'id': '90c1e98119e3c60169e03182a9c0467a39542f321f198f514d8168415537a7e7', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-09-19 07:33:37.877622 | orchestrator | skipping: [testbed-node-4] => (item={'id': '38c7969c01000b9aa95d20edde5fd529a55a35f3f3ef9f3834ce2767df6b58b6', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-09-19 07:33:37.877633 | orchestrator | 
skipping: [testbed-node-4] => (item={'id': 'c0dab744cb25b3826ad48c1748271e419a0f64e08781bb1a62ad6be9d402a2d5', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-09-19 07:33:37.877645 | orchestrator | skipping: [testbed-node-4] => (item={'id': '30fdbbd4cde3b0981f3035fb989e6957689cd2cd7695b5d0fd06022f8688bd98', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-09-19 07:33:37.877657 | orchestrator | skipping: [testbed-node-4] => (item={'id': '98e79ca631fc75eb4429120c22b2fb22864ad6efd807928bd4517082e5e13b88', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-09-19 07:33:37.877702 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4b49c58aa60197604fcf69796cbc67090b73c52cef0a4751e291ddbb6ee4cb55', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-09-19 07:33:37.877731 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a4fb6e7ab164381b3dcae1c3d245491d4dc1c42f74016de4a409effef0fe1007', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2025-09-19 07:33:37.877748 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e464939375c8f987b345c309b7415dc7a00189f77943f1e0bb7dd96a9b4871a6', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2025-09-19 07:33:37.877762 | orchestrator | ok: [testbed-node-4] => (item={'id': 'fc0ce292bad410a8b6ff32951f923b9229df1199dbde71f94b0bba8efa9af639', 
'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-09-19 07:33:37.877774 | orchestrator | ok: [testbed-node-4] => (item={'id': 'bd4f52afe955ef5eb92703c53561f79be0c8edad5381b469126f49e5565398f7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-09-19 07:33:37.877785 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9e79160e0e6a4af3b6215a8a978862de18542d34e72713a016163e6d434ff3a5', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-09-19 07:33:37.877813 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c4a9d65a15742a8e876473256b3e01b9a4b51a276a6b7cd5aaf97c873c06d731', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-09-19 07:33:37.877833 | orchestrator | skipping: [testbed-node-4] => (item={'id': '48d22f8384e34751b1d7cd4b70bfe656c3747a12a57c825d988474e5d2db1dc6', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-09-19 07:33:37.877844 | orchestrator | skipping: [testbed-node-4] => (item={'id': '791288c255725b9ffface85194146e421c72a49fe34d47396d67fa39b82ae6cc', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2025-09-19 07:33:37.877856 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5e9c31bdcae74c16c4b527393e936e11f7ae2abf39b5b4a3bee7c2af2839f392', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2025-09-19 07:33:37.877867 | orchestrator 
| skipping: [testbed-node-4] => (item={'id': '82a84a2c0fba3b0d4f0270fd737b15564dfe97d789da921ae5fb0e319361694f', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2025-09-19 07:33:37.877878 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9489573a090611aece815738a5042e8e4d456cda045643946269a80d955e3dba', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-09-19 07:33:37.877890 | orchestrator | skipping: [testbed-node-5] => (item={'id': '305478f60bfbd5a4507b49cef58d9917e7abe282a3cf411d41e75755fdf2e17c', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-09-19 07:33:37.877901 | orchestrator | skipping: [testbed-node-5] => (item={'id': '362ec772381d134f640c6eae12293f24ca6207664e8a93c152a8027619634ae3', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-09-19 07:33:37.877912 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2e7e79c93b41391b68a12a93b99a52cbbf417cb0df4d41c38b40e9702d8c9716', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-09-19 07:33:37.877923 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e1d73c3970bc59aeb6bfeff4c2bef38665ac413c5a0d6b07e6fdde48fb4531d7', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-09-19 07:33:37.877935 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9a68116e6809958260ed22092ed1a07cc4fbb834665403a5eded5450aa832bc6', 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-09-19 07:33:37.877952 | orchestrator | skipping: [testbed-node-5] => (item={'id': '843a9e402a99dd521f093bbe46ffba36898c89a23f298b44a89998a0b2663654', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-09-19 07:33:37.877964 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5e812b3e4a09f8acdefbc624aa03874d228662c9429b46882f905a6381dca1e4', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-09-19 07:33:37.877975 | orchestrator | skipping: [testbed-node-5] => (item={'id': '25d73a773c719859be23c2f454fc18e0a1ec10e7913a4e8ddb3bc2928000c373', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-09-19 07:33:37.877993 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fbd189695950f69da938ba393048680d41a8e52f7d6b37b727547ce7ff94d16d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-09-19 07:33:37.878011 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f553288a1e42f1cb7b7a111e4de9ba65c442e9b251139cf5e911e94e4c2fe7b8', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2025-09-19 07:33:44.720820 | orchestrator | skipping: [testbed-node-5] => (item={'id': '546ca23fda20ae6a16717eaa959d10394df6a50a33d40a012aeee6545e43157b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 
'state': 'running', 'status': 'Up 23 minutes'})  2025-09-19 07:33:44.720903 | orchestrator | ok: [testbed-node-5] => (item={'id': 'd8f181adfe00fa021725ae913c041ee42632ac2605dbbe7c2e1c52211b39b837', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-09-19 07:33:44.720912 | orchestrator | ok: [testbed-node-5] => (item={'id': 'd3a2732ffabb519e1fcd0aac6be311dfe7673e4ac909aec614619a652c173f08', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-09-19 07:33:44.720918 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a2ce72d7f5e964b39979edb04ba142f8dcc639186a1d312687544180e7e67467', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-09-19 07:33:44.720925 | orchestrator | skipping: [testbed-node-5] => (item={'id': '97128e3e6f87a0d38c8251f08f562b0bc299ed455ce96ea2ea1e902cb50b0d28', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-09-19 07:33:44.720932 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd3a860b88793dd4b6746dfd427bf2c0617dafb47cd0f695e1bdebecf67ea31c8', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-09-19 07:33:44.720937 | orchestrator | skipping: [testbed-node-5] => (item={'id': '987a8571378c8321ca8bbb904463aa8f1da0c160b54f78243e958f0a217115cd', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2025-09-19 07:33:44.720942 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'88d48d6443feb3ef60a00721c41eeafece854cc74fd845288bb32d2d01fc9873', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2025-09-19 07:33:44.720947 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cb27bb61a0dbf9dd6d08e63ea130a8536eac0347cf0b04c14ba8bad6f66456b7', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2025-09-19 07:33:44.720952 | orchestrator | 2025-09-19 07:33:44.720958 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-09-19 07:33:44.720965 | orchestrator | Friday 19 September 2025 07:33:37 +0000 (0:00:00.472) 0:00:04.385 ****** 2025-09-19 07:33:44.720970 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:33:44.720975 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:33:44.720980 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:33:44.720985 | orchestrator | 2025-09-19 07:33:44.720990 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-09-19 07:33:44.720995 | orchestrator | Friday 19 September 2025 07:33:38 +0000 (0:00:00.262) 0:00:04.647 ****** 2025-09-19 07:33:44.721012 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:33:44.721034 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:33:44.721039 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:33:44.721043 | orchestrator | 2025-09-19 07:33:44.721048 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-09-19 07:33:44.721053 | orchestrator | Friday 19 September 2025 07:33:38 +0000 (0:00:00.250) 0:00:04.898 ****** 2025-09-19 07:33:44.721058 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:33:44.721063 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:33:44.721068 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:33:44.721072 
| orchestrator | 2025-09-19 07:33:44.721077 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-19 07:33:44.721082 | orchestrator | Friday 19 September 2025 07:33:38 +0000 (0:00:00.371) 0:00:05.270 ****** 2025-09-19 07:33:44.721087 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:33:44.721092 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:33:44.721096 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:33:44.721101 | orchestrator | 2025-09-19 07:33:44.721106 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-09-19 07:33:44.721111 | orchestrator | Friday 19 September 2025 07:33:39 +0000 (0:00:00.263) 0:00:05.533 ****** 2025-09-19 07:33:44.721116 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-09-19 07:33:44.721122 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-09-19 07:33:44.721127 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:33:44.721132 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-09-19 07:33:44.721137 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-09-19 07:33:44.721152 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:33:44.721166 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-09-19 07:33:44.721171 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-09-19 07:33:44.721176 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:33:44.721187 | orchestrator | 2025-09-19 07:33:44.721192 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-09-19 07:33:44.721197 | 
orchestrator | Friday 19 September 2025 07:33:39 +0000 (0:00:00.284) 0:00:05.817 ****** 2025-09-19 07:33:44.721202 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:33:44.721207 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:33:44.721212 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:33:44.721217 | orchestrator | 2025-09-19 07:33:44.721221 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-09-19 07:33:44.721226 | orchestrator | Friday 19 September 2025 07:33:39 +0000 (0:00:00.277) 0:00:06.095 ****** 2025-09-19 07:33:44.721231 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:33:44.721236 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:33:44.721240 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:33:44.721245 | orchestrator | 2025-09-19 07:33:44.721250 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-09-19 07:33:44.721255 | orchestrator | Friday 19 September 2025 07:33:39 +0000 (0:00:00.377) 0:00:06.473 ****** 2025-09-19 07:33:44.721260 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:33:44.721264 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:33:44.721269 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:33:44.721274 | orchestrator | 2025-09-19 07:33:44.721279 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-09-19 07:33:44.721283 | orchestrator | Friday 19 September 2025 07:33:40 +0000 (0:00:00.277) 0:00:06.751 ****** 2025-09-19 07:33:44.721288 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:33:44.721293 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:33:44.721298 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:33:44.721303 | orchestrator | 2025-09-19 07:33:44.721312 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-19 07:33:44.721317 | orchestrator | Friday 19 September 2025 
07:33:40 +0000 (0:00:00.261) 0:00:07.012 ****** 2025-09-19 07:33:44.721322 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:33:44.721327 | orchestrator | 2025-09-19 07:33:44.721331 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-19 07:33:44.721336 | orchestrator | Friday 19 September 2025 07:33:40 +0000 (0:00:00.208) 0:00:07.220 ****** 2025-09-19 07:33:44.721341 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:33:44.721346 | orchestrator | 2025-09-19 07:33:44.721351 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-19 07:33:44.721356 | orchestrator | Friday 19 September 2025 07:33:40 +0000 (0:00:00.220) 0:00:07.441 ****** 2025-09-19 07:33:44.721361 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:33:44.721366 | orchestrator | 2025-09-19 07:33:44.721370 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 07:33:44.721375 | orchestrator | Friday 19 September 2025 07:33:41 +0000 (0:00:00.232) 0:00:07.674 ****** 2025-09-19 07:33:44.721380 | orchestrator | 2025-09-19 07:33:44.721386 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 07:33:44.721392 | orchestrator | Friday 19 September 2025 07:33:41 +0000 (0:00:00.066) 0:00:07.741 ****** 2025-09-19 07:33:44.721397 | orchestrator | 2025-09-19 07:33:44.721403 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 07:33:44.721408 | orchestrator | Friday 19 September 2025 07:33:41 +0000 (0:00:00.062) 0:00:07.803 ****** 2025-09-19 07:33:44.721414 | orchestrator | 2025-09-19 07:33:44.721419 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-19 07:33:44.721425 | orchestrator | Friday 19 September 2025 07:33:41 +0000 (0:00:00.233) 0:00:08.036 ****** 2025-09-19 
07:33:44.721431 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:33:44.721436 | orchestrator | 2025-09-19 07:33:44.721442 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-09-19 07:33:44.721447 | orchestrator | Friday 19 September 2025 07:33:41 +0000 (0:00:00.260) 0:00:08.297 ****** 2025-09-19 07:33:44.721453 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:33:44.721459 | orchestrator | 2025-09-19 07:33:44.721465 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-19 07:33:44.721470 | orchestrator | Friday 19 September 2025 07:33:42 +0000 (0:00:00.267) 0:00:08.565 ****** 2025-09-19 07:33:44.721476 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:33:44.721481 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:33:44.721487 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:33:44.721492 | orchestrator | 2025-09-19 07:33:44.721498 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-09-19 07:33:44.721504 | orchestrator | Friday 19 September 2025 07:33:42 +0000 (0:00:00.294) 0:00:08.860 ****** 2025-09-19 07:33:44.721510 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:33:44.721515 | orchestrator | 2025-09-19 07:33:44.721520 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-09-19 07:33:44.721526 | orchestrator | Friday 19 September 2025 07:33:42 +0000 (0:00:00.231) 0:00:09.091 ****** 2025-09-19 07:33:44.721532 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-19 07:33:44.721537 | orchestrator | 2025-09-19 07:33:44.721543 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-09-19 07:33:44.721548 | orchestrator | Friday 19 September 2025 07:33:44 +0000 (0:00:01.600) 0:00:10.692 ****** 2025-09-19 07:33:44.721554 | orchestrator | ok: [testbed-node-3] 2025-09-19 
07:33:44.721559 | orchestrator | 2025-09-19 07:33:44.721565 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-09-19 07:33:44.721570 | orchestrator | Friday 19 September 2025 07:33:44 +0000 (0:00:00.139) 0:00:10.831 ****** 2025-09-19 07:33:44.721576 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:33:44.721582 | orchestrator | 2025-09-19 07:33:44.721588 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-09-19 07:33:44.721597 | orchestrator | Friday 19 September 2025 07:33:44 +0000 (0:00:00.285) 0:00:11.117 ****** 2025-09-19 07:33:44.721605 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:33:57.485826 | orchestrator | 2025-09-19 07:33:57.485939 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-09-19 07:33:57.485956 | orchestrator | Friday 19 September 2025 07:33:44 +0000 (0:00:00.118) 0:00:11.236 ****** 2025-09-19 07:33:57.485969 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:33:57.485981 | orchestrator | 2025-09-19 07:33:57.485992 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-19 07:33:57.486004 | orchestrator | Friday 19 September 2025 07:33:44 +0000 (0:00:00.147) 0:00:11.383 ****** 2025-09-19 07:33:57.486073 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:33:57.486086 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:33:57.486097 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:33:57.486108 | orchestrator | 2025-09-19 07:33:57.486119 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-09-19 07:33:57.486130 | orchestrator | Friday 19 September 2025 07:33:45 +0000 (0:00:00.541) 0:00:11.924 ****** 2025-09-19 07:33:57.486142 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:33:57.486154 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:33:57.486212 | orchestrator | 
changed: [testbed-node-5] 2025-09-19 07:33:57.486225 | orchestrator | 2025-09-19 07:33:57.486236 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-09-19 07:33:57.486247 | orchestrator | Friday 19 September 2025 07:33:47 +0000 (0:00:02.307) 0:00:14.232 ****** 2025-09-19 07:33:57.486258 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:33:57.486269 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:33:57.486280 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:33:57.486291 | orchestrator | 2025-09-19 07:33:57.486302 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-09-19 07:33:57.486313 | orchestrator | Friday 19 September 2025 07:33:48 +0000 (0:00:00.305) 0:00:14.537 ****** 2025-09-19 07:33:57.486324 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:33:57.486334 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:33:57.486345 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:33:57.486356 | orchestrator | 2025-09-19 07:33:57.486367 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-09-19 07:33:57.486377 | orchestrator | Friday 19 September 2025 07:33:48 +0000 (0:00:00.482) 0:00:15.020 ****** 2025-09-19 07:33:57.486388 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:33:57.486399 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:33:57.486410 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:33:57.486421 | orchestrator | 2025-09-19 07:33:57.486431 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-09-19 07:33:57.486442 | orchestrator | Friday 19 September 2025 07:33:49 +0000 (0:00:00.526) 0:00:15.547 ****** 2025-09-19 07:33:57.486453 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:33:57.486465 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:33:57.486475 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:33:57.486486 | 
orchestrator | 2025-09-19 07:33:57.486497 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-09-19 07:33:57.486508 | orchestrator | Friday 19 September 2025 07:33:49 +0000 (0:00:00.323) 0:00:15.871 ****** 2025-09-19 07:33:57.486519 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:33:57.486529 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:33:57.486540 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:33:57.486551 | orchestrator | 2025-09-19 07:33:57.486562 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-09-19 07:33:57.486573 | orchestrator | Friday 19 September 2025 07:33:49 +0000 (0:00:00.285) 0:00:16.156 ****** 2025-09-19 07:33:57.486584 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:33:57.486594 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:33:57.486605 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:33:57.486639 | orchestrator | 2025-09-19 07:33:57.486650 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-19 07:33:57.486661 | orchestrator | Friday 19 September 2025 07:33:49 +0000 (0:00:00.314) 0:00:16.471 ****** 2025-09-19 07:33:57.486672 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:33:57.486715 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:33:57.486726 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:33:57.486736 | orchestrator | 2025-09-19 07:33:57.486747 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-09-19 07:33:57.486758 | orchestrator | Friday 19 September 2025 07:33:50 +0000 (0:00:00.734) 0:00:17.206 ****** 2025-09-19 07:33:57.486769 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:33:57.486784 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:33:57.486795 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:33:57.486806 | orchestrator | 2025-09-19 
07:33:57.486817 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-09-19 07:33:57.486828 | orchestrator | Friday 19 September 2025 07:33:51 +0000 (0:00:00.498) 0:00:17.704 ****** 2025-09-19 07:33:57.486838 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:33:57.486849 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:33:57.486859 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:33:57.486870 | orchestrator | 2025-09-19 07:33:57.486881 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-09-19 07:33:57.486892 | orchestrator | Friday 19 September 2025 07:33:51 +0000 (0:00:00.305) 0:00:18.009 ****** 2025-09-19 07:33:57.486902 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:33:57.486913 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:33:57.486924 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:33:57.486935 | orchestrator | 2025-09-19 07:33:57.486945 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-09-19 07:33:57.486956 | orchestrator | Friday 19 September 2025 07:33:51 +0000 (0:00:00.290) 0:00:18.300 ****** 2025-09-19 07:33:57.486967 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:33:57.486977 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:33:57.486988 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:33:57.486999 | orchestrator | 2025-09-19 07:33:57.487009 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-09-19 07:33:57.487020 | orchestrator | Friday 19 September 2025 07:33:52 +0000 (0:00:00.514) 0:00:18.814 ****** 2025-09-19 07:33:57.487031 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 07:33:57.487042 | orchestrator | 2025-09-19 07:33:57.487052 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-09-19 07:33:57.487063 | 
orchestrator | Friday 19 September 2025 07:33:52 +0000 (0:00:00.257) 0:00:19.072 ****** 2025-09-19 07:33:57.487074 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:33:57.487085 | orchestrator | 2025-09-19 07:33:57.487113 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-19 07:33:57.487125 | orchestrator | Friday 19 September 2025 07:33:52 +0000 (0:00:00.258) 0:00:19.330 ****** 2025-09-19 07:33:57.487136 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 07:33:57.487146 | orchestrator | 2025-09-19 07:33:57.487157 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-19 07:33:57.487167 | orchestrator | Friday 19 September 2025 07:33:54 +0000 (0:00:01.630) 0:00:20.961 ****** 2025-09-19 07:33:57.487178 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 07:33:57.487189 | orchestrator | 2025-09-19 07:33:57.487199 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-19 07:33:57.487210 | orchestrator | Friday 19 September 2025 07:33:54 +0000 (0:00:00.265) 0:00:21.227 ****** 2025-09-19 07:33:57.487220 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 07:33:57.487231 | orchestrator | 2025-09-19 07:33:57.487242 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 07:33:57.487252 | orchestrator | Friday 19 September 2025 07:33:54 +0000 (0:00:00.249) 0:00:21.476 ****** 2025-09-19 07:33:57.487272 | orchestrator | 2025-09-19 07:33:57.487283 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 07:33:57.487293 | orchestrator | Friday 19 September 2025 07:33:55 +0000 (0:00:00.066) 0:00:21.542 ****** 2025-09-19 07:33:57.487304 | orchestrator | 2025-09-19 07:33:57.487314 | orchestrator | TASK [Flush handlers] 
********************************************************** 2025-09-19 07:33:57.487325 | orchestrator | Friday 19 September 2025 07:33:55 +0000 (0:00:00.066) 0:00:21.609 ****** 2025-09-19 07:33:57.487335 | orchestrator | 2025-09-19 07:33:57.487346 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-09-19 07:33:57.487357 | orchestrator | Friday 19 September 2025 07:33:55 +0000 (0:00:00.069) 0:00:21.679 ****** 2025-09-19 07:33:57.487367 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 07:33:57.487378 | orchestrator | 2025-09-19 07:33:57.487388 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-19 07:33:57.487399 | orchestrator | Friday 19 September 2025 07:33:56 +0000 (0:00:01.535) 0:00:23.214 ****** 2025-09-19 07:33:57.487410 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-09-19 07:33:57.487420 | orchestrator |  "msg": [ 2025-09-19 07:33:57.487431 | orchestrator |  "Validator run completed.", 2025-09-19 07:33:57.487442 | orchestrator |  "You can find the report file here:", 2025-09-19 07:33:57.487453 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-09-19T07:33:34+00:00-report.json", 2025-09-19 07:33:57.487465 | orchestrator |  "on the following host:", 2025-09-19 07:33:57.487476 | orchestrator |  "testbed-manager" 2025-09-19 07:33:57.487487 | orchestrator |  ] 2025-09-19 07:33:57.487498 | orchestrator | } 2025-09-19 07:33:57.487509 | orchestrator | 2025-09-19 07:33:57.487520 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:33:57.487532 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-09-19 07:33:57.487544 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-19 07:33:57.487554 | orchestrator | 
testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-19 07:33:57.487565 | orchestrator | 2025-09-19 07:33:57.487576 | orchestrator | 2025-09-19 07:33:57.487586 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:33:57.487597 | orchestrator | Friday 19 September 2025 07:33:57 +0000 (0:00:00.764) 0:00:23.979 ****** 2025-09-19 07:33:57.487608 | orchestrator | =============================================================================== 2025-09-19 07:33:57.487623 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.31s 2025-09-19 07:33:57.487634 | orchestrator | Aggregate test results step one ----------------------------------------- 1.63s 2025-09-19 07:33:57.487645 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.60s 2025-09-19 07:33:57.487656 | orchestrator | Write report file ------------------------------------------------------- 1.54s 2025-09-19 07:33:57.487666 | orchestrator | Create report output directory ------------------------------------------ 0.81s 2025-09-19 07:33:57.487677 | orchestrator | Print report file information ------------------------------------------- 0.76s 2025-09-19 07:33:57.487706 | orchestrator | Prepare test data ------------------------------------------------------- 0.73s 2025-09-19 07:33:57.487717 | orchestrator | Get timestamp for report file ------------------------------------------- 0.64s 2025-09-19 07:33:57.487728 | orchestrator | Prepare test data ------------------------------------------------------- 0.54s 2025-09-19 07:33:57.487738 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.53s 2025-09-19 07:33:57.487749 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.51s 2025-09-19 07:33:57.487767 | orchestrator | Get CRUSH node data of each OSD host and root 
node childs --------------- 0.50s 2025-09-19 07:33:57.487777 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.48s 2025-09-19 07:33:57.487788 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.48s 2025-09-19 07:33:57.487799 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.47s 2025-09-19 07:33:57.487810 | orchestrator | Prepare test data ------------------------------------------------------- 0.38s 2025-09-19 07:33:57.487827 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.38s 2025-09-19 07:33:57.750207 | orchestrator | Set test result to passed if count matches ------------------------------ 0.37s 2025-09-19 07:33:57.750303 | orchestrator | Flush handlers ---------------------------------------------------------- 0.36s 2025-09-19 07:33:57.750316 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.32s 2025-09-19 07:33:58.017182 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-09-19 07:33:58.023625 | orchestrator | + set -e 2025-09-19 07:33:58.023793 | orchestrator | + source /opt/manager-vars.sh 2025-09-19 07:33:58.023823 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-19 07:33:58.023843 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-19 07:33:58.023862 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-19 07:33:58.024187 | orchestrator | ++ CEPH_VERSION=reef 2025-09-19 07:33:58.024208 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-19 07:33:58.024220 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-19 07:33:58.024231 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-09-19 07:33:58.024243 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-09-19 07:33:58.024254 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-19 07:33:58.024264 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-19 
07:33:58.024275 | orchestrator | ++ export ARA=false 2025-09-19 07:33:58.024286 | orchestrator | ++ ARA=false 2025-09-19 07:33:58.024297 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-19 07:33:58.024308 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-19 07:33:58.024319 | orchestrator | ++ export TEMPEST=false 2025-09-19 07:33:58.024330 | orchestrator | ++ TEMPEST=false 2025-09-19 07:33:58.024340 | orchestrator | ++ export IS_ZUUL=true 2025-09-19 07:33:58.024351 | orchestrator | ++ IS_ZUUL=true 2025-09-19 07:33:58.024362 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189 2025-09-19 07:33:58.024373 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.189 2025-09-19 07:33:58.024384 | orchestrator | ++ export EXTERNAL_API=false 2025-09-19 07:33:58.024394 | orchestrator | ++ EXTERNAL_API=false 2025-09-19 07:33:58.024405 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-19 07:33:58.024416 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-19 07:33:58.024427 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-19 07:33:58.024437 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-19 07:33:58.024448 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-19 07:33:58.024459 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-19 07:33:58.024482 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-09-19 07:33:58.024493 | orchestrator | + source /etc/os-release 2025-09-19 07:33:58.024504 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS' 2025-09-19 07:33:58.024515 | orchestrator | ++ NAME=Ubuntu 2025-09-19 07:33:58.024526 | orchestrator | ++ VERSION_ID=24.04 2025-09-19 07:33:58.024537 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)' 2025-09-19 07:33:58.024548 | orchestrator | ++ VERSION_CODENAME=noble 2025-09-19 07:33:58.024559 | orchestrator | ++ ID=ubuntu 2025-09-19 07:33:58.024569 | orchestrator | ++ ID_LIKE=debian 2025-09-19 07:33:58.024580 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-09-19 07:33:58.024591 | 
orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-09-19 07:33:58.024602 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-09-19 07:33:58.024613 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-09-19 07:33:58.024625 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-09-19 07:33:58.024636 | orchestrator | ++ LOGO=ubuntu-logo 2025-09-19 07:33:58.024646 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-09-19 07:33:58.024658 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-09-19 07:33:58.024671 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-09-19 07:33:58.055504 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-09-19 07:34:19.135641 | orchestrator | 2025-09-19 07:34:19.135826 | orchestrator | # Status of Elasticsearch 2025-09-19 07:34:19.135846 | orchestrator | 2025-09-19 07:34:19.135859 | orchestrator | + pushd /opt/configuration/contrib 2025-09-19 07:34:19.135872 | orchestrator | + echo 2025-09-19 07:34:19.135883 | orchestrator | + echo '# Status of Elasticsearch' 2025-09-19 07:34:19.135894 | orchestrator | + echo 2025-09-19 07:34:19.135906 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-09-19 07:34:19.333609 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-09-19 07:34:19.333753 | orchestrator | 2025-09-19 07:34:19.333770 | orchestrator | # Status of MariaDB 2025-09-19 07:34:19.333783 | orchestrator | 2025-09-19 07:34:19.333794 | orchestrator | + echo 2025-09-19 07:34:19.333806 | orchestrator | + echo '# Status of MariaDB' 2025-09-19 07:34:19.333817 | orchestrator | + echo 2025-09-19 07:34:19.333828 | orchestrator | + MARIADB_USER=root_shard_0 2025-09-19 07:34:19.333840 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-09-19 07:34:19.399529 | orchestrator | Reading package lists... 2025-09-19 07:34:19.633193 | orchestrator | Building dependency tree... 2025-09-19 07:34:19.633372 | orchestrator | Reading state information... 2025-09-19 07:34:19.958692 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-09-19 07:34:19.958862 | orchestrator | bc set to manually installed. 2025-09-19 07:34:19.958883 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded. 
2025-09-19 07:34:20.659855 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-09-19 07:34:20.660631 | orchestrator | 2025-09-19 07:34:20.660663 | orchestrator | # Status of Prometheus 2025-09-19 07:34:20.660677 | orchestrator | 2025-09-19 07:34:20.660691 | orchestrator | + echo 2025-09-19 07:34:20.660746 | orchestrator | + echo '# Status of Prometheus' 2025-09-19 07:34:20.660759 | orchestrator | + echo 2025-09-19 07:34:20.660773 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-09-19 07:34:20.723521 | orchestrator | Unauthorized 2025-09-19 07:34:20.727040 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-09-19 07:34:20.807012 | orchestrator | Unauthorized 2025-09-19 07:34:20.810142 | orchestrator | 2025-09-19 07:34:20.810179 | orchestrator | # Status of RabbitMQ 2025-09-19 07:34:20.810196 | orchestrator | 2025-09-19 07:34:20.810209 | orchestrator | + echo 2025-09-19 07:34:20.810224 | orchestrator | + echo '# Status of RabbitMQ' 2025-09-19 07:34:20.810239 | orchestrator | + echo 2025-09-19 07:34:20.810255 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-09-19 07:34:21.337117 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-09-19 07:34:21.345447 | orchestrator | 2025-09-19 07:34:21.345505 | orchestrator | # Status of Redis 2025-09-19 07:34:21.345518 | orchestrator | 2025-09-19 07:34:21.345530 | orchestrator | + echo 2025-09-19 07:34:21.345542 | orchestrator | + echo '# Status of Redis' 2025-09-19 07:34:21.345553 | orchestrator | + echo 2025-09-19 07:34:21.345566 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-09-19 07:34:21.349785 | orchestrator | 
TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001625s;;;0.000000;10.000000 2025-09-19 07:34:21.350460 | orchestrator | + popd 2025-09-19 07:34:21.350957 | orchestrator | 2025-09-19 07:34:21.350988 | orchestrator | # Create backup of MariaDB database 2025-09-19 07:34:21.351001 | orchestrator | 2025-09-19 07:34:21.351012 | orchestrator | + echo 2025-09-19 07:34:21.351024 | orchestrator | + echo '# Create backup of MariaDB database' 2025-09-19 07:34:21.351035 | orchestrator | + echo 2025-09-19 07:34:21.351046 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-09-19 07:34:23.320456 | orchestrator | 2025-09-19 07:34:23 | INFO  | Task 955d6567-4123-4237-add9-49372b902ea9 (mariadb_backup) was prepared for execution. 2025-09-19 07:34:23.320551 | orchestrator | 2025-09-19 07:34:23 | INFO  | It takes a moment until task 955d6567-4123-4237-add9-49372b902ea9 (mariadb_backup) has been started and output is visible here. 2025-09-19 07:35:27.279623 | orchestrator | 2025-09-19 07:35:27.279734 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:35:27.279751 | orchestrator | 2025-09-19 07:35:27.279817 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 07:35:27.279830 | orchestrator | Friday 19 September 2025 07:34:27 +0000 (0:00:00.177) 0:00:00.177 ****** 2025-09-19 07:35:27.279842 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:35:27.279853 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:35:27.279864 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:35:27.279875 | orchestrator | 2025-09-19 07:35:27.279886 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:35:27.279898 | orchestrator | Friday 19 September 2025 07:34:27 +0000 (0:00:00.302) 0:00:00.480 ****** 2025-09-19 07:35:27.279908 | orchestrator | ok: [testbed-node-0] => 
(item=enable_mariadb_True) 2025-09-19 07:35:27.279920 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-09-19 07:35:27.279930 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-19 07:35:27.279941 | orchestrator | 2025-09-19 07:35:27.279952 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-19 07:35:27.279963 | orchestrator | 2025-09-19 07:35:27.279974 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-19 07:35:27.279985 | orchestrator | Friday 19 September 2025 07:34:28 +0000 (0:00:00.543) 0:00:01.024 ****** 2025-09-19 07:35:27.279995 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-19 07:35:27.280006 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-19 07:35:27.280017 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-19 07:35:27.280028 | orchestrator | 2025-09-19 07:35:27.280040 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-19 07:35:27.280123 | orchestrator | Friday 19 September 2025 07:34:28 +0000 (0:00:00.385) 0:00:01.409 ****** 2025-09-19 07:35:27.280136 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:35:27.280149 | orchestrator | 2025-09-19 07:35:27.280160 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-09-19 07:35:27.280173 | orchestrator | Friday 19 September 2025 07:34:29 +0000 (0:00:00.526) 0:00:01.936 ****** 2025-09-19 07:35:27.280187 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:35:27.280199 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:35:27.280212 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:35:27.280225 | orchestrator | 2025-09-19 07:35:27.280238 | orchestrator | TASK [mariadb : Taking full database backup via 
Mariabackup] *******************
2025-09-19 07:35:27.280250 | orchestrator | Friday 19 September 2025 07:34:32 +0000 (0:00:03.096) 0:00:05.032 ******
2025-09-19 07:35:27.280263 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-09-19 07:35:27.280275 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2025-09-19 07:35:27.280289 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-09-19 07:35:27.280301 | orchestrator | mariadb_bootstrap_restart
2025-09-19 07:35:27.280314 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:35:27.280326 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:35:27.280339 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:35:27.280352 | orchestrator |
2025-09-19 07:35:27.280364 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-09-19 07:35:27.280376 | orchestrator | skipping: no hosts matched
2025-09-19 07:35:27.280389 | orchestrator |
2025-09-19 07:35:27.280402 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-09-19 07:35:27.280415 | orchestrator | skipping: no hosts matched
2025-09-19 07:35:27.280428 | orchestrator |
2025-09-19 07:35:27.280440 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-09-19 07:35:27.280453 | orchestrator | skipping: no hosts matched
2025-09-19 07:35:27.280487 | orchestrator |
2025-09-19 07:35:27.280500 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-09-19 07:35:27.280513 | orchestrator |
2025-09-19 07:35:27.280526 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-09-19 07:35:27.280537 | orchestrator | Friday 19 September 2025 07:35:26 +0000 (0:00:54.086) 0:00:59.118 ******
2025-09-19 07:35:27.280548 | orchestrator |
skipping: [testbed-node-0] 2025-09-19 07:35:27.280558 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:35:27.280569 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:35:27.280580 | orchestrator | 2025-09-19 07:35:27.280591 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-19 07:35:27.280602 | orchestrator | Friday 19 September 2025 07:35:26 +0000 (0:00:00.307) 0:00:59.426 ****** 2025-09-19 07:35:27.280612 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:35:27.280623 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:35:27.280634 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:35:27.280644 | orchestrator | 2025-09-19 07:35:27.280655 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:35:27.280667 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 07:35:27.280679 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 07:35:27.280690 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 07:35:27.280701 | orchestrator | 2025-09-19 07:35:27.280712 | orchestrator | 2025-09-19 07:35:27.280723 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:35:27.280733 | orchestrator | Friday 19 September 2025 07:35:26 +0000 (0:00:00.409) 0:00:59.835 ****** 2025-09-19 07:35:27.280744 | orchestrator | =============================================================================== 2025-09-19 07:35:27.280755 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 54.09s 2025-09-19 07:35:27.280803 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.10s 2025-09-19 07:35:27.280815 | orchestrator | Group hosts based on 
enabled services ----------------------------------- 0.54s 2025-09-19 07:35:27.280826 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.53s 2025-09-19 07:35:27.280837 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.41s 2025-09-19 07:35:27.280865 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.39s 2025-09-19 07:35:27.280876 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.31s 2025-09-19 07:35:27.280887 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2025-09-19 07:35:27.541934 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-09-19 07:35:27.550734 | orchestrator | + set -e 2025-09-19 07:35:27.550837 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-19 07:35:27.550863 | orchestrator | ++ export INTERACTIVE=false 2025-09-19 07:35:27.550875 | orchestrator | ++ INTERACTIVE=false 2025-09-19 07:35:27.550886 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-19 07:35:27.550896 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-19 07:35:27.550908 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-09-19 07:35:27.552311 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-09-19 07:35:27.558302 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-09-19 07:35:27.558348 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-09-19 07:35:27.558360 | orchestrator | + export OS_CLOUD=admin 2025-09-19 07:35:27.558371 | orchestrator | + OS_CLOUD=admin 2025-09-19 07:35:27.558382 | orchestrator | + echo 2025-09-19 07:35:27.558393 | orchestrator | 2025-09-19 07:35:27.558405 | orchestrator | # OpenStack endpoints 2025-09-19 07:35:27.558416 | orchestrator | 2025-09-19 07:35:27.558427 | orchestrator | + echo '# OpenStack 
endpoints' 2025-09-19 07:35:27.558438 | orchestrator | + echo 2025-09-19 07:35:27.558473 | orchestrator | + openstack endpoint list 2025-09-19 07:35:30.770882 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-19 07:35:30.770986 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-09-19 07:35:30.771001 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-19 07:35:30.771012 | orchestrator | | 037a870db6284855b0fd0acd002102ba | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-09-19 07:35:30.771024 | orchestrator | | 1673b3f0febc4df3aeddef1a9fc52721 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-09-19 07:35:30.771034 | orchestrator | | 1cc949d3d61648d0a65de5d6293ecaf6 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-09-19 07:35:30.771045 | orchestrator | | 2edcc14dae6e415eb35675a3f26327c9 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-09-19 07:35:30.771056 | orchestrator | | 3990cd7fa7d540a88002ae5fcf529ece | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-09-19 07:35:30.771084 | orchestrator | | 3f5bc4de4d6544a1b51001052f35eaf9 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-09-19 07:35:30.771095 | orchestrator | | 464831979acd4d9a9b07ff8f3ca278fc | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-09-19 07:35:30.771106 | orchestrator | | 
603db0e72a384929b0ea09421a3df761 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-09-19 07:35:30.771117 | orchestrator | | 71add2d0dd814774bcc0f322186b4191 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-09-19 07:35:30.771127 | orchestrator | | 781fd0195d4645e9ac656fbe627022dd | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-09-19 07:35:30.771138 | orchestrator | | 8059ee51165941f897d83064c8b4dd34 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-09-19 07:35:30.771149 | orchestrator | | 9d48a1087c14467289beb67c9a31e7e8 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-09-19 07:35:30.771159 | orchestrator | | a30ac779e6cc472caa2fafefd31df038 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-09-19 07:35:30.771170 | orchestrator | | a69a17eafb50473c8025450bfd1cd7b1 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-09-19 07:35:30.771181 | orchestrator | | c365c0c8932b4d149adbbc2b44557dd7 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-09-19 07:35:30.771192 | orchestrator | | c5327fb326214db6b1043ed6b44ba676 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-09-19 07:35:30.771202 | orchestrator | | c9775d327c604d469c0bd9c3546843bb | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-09-19 07:35:30.771234 | orchestrator | | cc1a3d8a06744cbeafd73a23b3d42626 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-09-19 07:35:30.771245 | orchestrator | | df6ce9b5b91c47de9945752b0899b75f | RegionOne | neutron | 
network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-09-19 07:35:30.771255 | orchestrator | | f205623004c44421b94bbf0806ada92b | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-09-19 07:35:30.771283 | orchestrator | | f3fc3fbcbcfc4267a6a3ec9106446def | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-09-19 07:35:30.771294 | orchestrator | | fee61b65ff594c289b8b77456783e8d7 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-09-19 07:35:30.771305 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-19 07:35:31.022267 | orchestrator | 2025-09-19 07:35:31.022387 | orchestrator | # Cinder 2025-09-19 07:35:31.022402 | orchestrator | 2025-09-19 07:35:31.022414 | orchestrator | + echo 2025-09-19 07:35:31.022426 | orchestrator | + echo '# Cinder' 2025-09-19 07:35:31.022437 | orchestrator | + echo 2025-09-19 07:35:31.022447 | orchestrator | + openstack volume service list 2025-09-19 07:35:34.206528 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-19 07:35:34.206633 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-09-19 07:35:34.206649 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-19 07:35:34.206661 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-09-19T07:35:25.000000 | 2025-09-19 07:35:34.206672 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-09-19T07:35:27.000000 | 2025-09-19 07:35:34.206683 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-09-19T07:35:27.000000 | 2025-09-19 
07:35:34.206694 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-09-19T07:35:30.000000 | 2025-09-19 07:35:34.206704 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-09-19T07:35:30.000000 | 2025-09-19 07:35:34.206734 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-09-19T07:35:30.000000 | 2025-09-19 07:35:34.206745 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-09-19T07:35:29.000000 | 2025-09-19 07:35:34.206756 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-09-19T07:35:29.000000 | 2025-09-19 07:35:34.206863 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-09-19T07:35:30.000000 | 2025-09-19 07:35:34.206879 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-19 07:35:34.360052 | orchestrator | 2025-09-19 07:35:34.360135 | orchestrator | # Neutron 2025-09-19 07:35:34.360146 | orchestrator | 2025-09-19 07:35:34.360155 | orchestrator | + echo 2025-09-19 07:35:34.360164 | orchestrator | + echo '# Neutron' 2025-09-19 07:35:34.360174 | orchestrator | + echo 2025-09-19 07:35:34.360183 | orchestrator | + openstack network agent list 2025-09-19 07:35:36.981021 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-19 07:35:36.981125 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-09-19 07:35:36.981139 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-19 07:35:36.981178 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-09-19 
07:35:36.981190 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-09-19 07:35:36.981201 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-09-19 07:35:36.981212 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-09-19 07:35:36.981223 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-09-19 07:35:36.981233 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-09-19 07:35:36.981244 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-19 07:35:36.981254 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-19 07:35:36.981265 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-19 07:35:36.981276 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-19 07:35:37.249114 | orchestrator | + openstack network service provider list 2025-09-19 07:35:39.812232 | orchestrator | +---------------+------+---------+ 2025-09-19 07:35:39.812337 | orchestrator | | Service Type | Name | Default | 2025-09-19 07:35:39.812351 | orchestrator | +---------------+------+---------+ 2025-09-19 07:35:39.812362 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-09-19 07:35:39.812373 | orchestrator | +---------------+------+---------+ 2025-09-19 07:35:40.083764 | orchestrator | 2025-09-19 07:35:40.083924 | orchestrator | # Nova 2025-09-19 07:35:40.083940 | orchestrator 
| 2025-09-19 07:35:40.083951 | orchestrator | + echo 2025-09-19 07:35:40.083962 | orchestrator | + echo '# Nova' 2025-09-19 07:35:40.083976 | orchestrator | + echo 2025-09-19 07:35:40.083999 | orchestrator | + openstack compute service list 2025-09-19 07:35:42.888632 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-19 07:35:42.888730 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-09-19 07:35:42.888745 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-19 07:35:42.888757 | orchestrator | | 7cee0cb7-4086-4739-8120-ecc79229f27e | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-09-19T07:35:39.000000 | 2025-09-19 07:35:42.888768 | orchestrator | | d39eb41e-6ae2-4b42-95ab-f39b7feab585 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-09-19T07:35:32.000000 | 2025-09-19 07:35:42.888827 | orchestrator | | 5cb6ea6a-8f1a-4190-913f-4aec4f124e54 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-09-19T07:35:34.000000 | 2025-09-19 07:35:42.888839 | orchestrator | | aca1a1cb-adc6-46ab-99df-ebc73c6b58eb | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-09-19T07:35:37.000000 | 2025-09-19 07:35:42.888850 | orchestrator | | cf8ab711-7dda-4101-91e6-a7279680e6fe | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-09-19T07:35:40.000000 | 2025-09-19 07:35:42.888861 | orchestrator | | 328ef652-e735-4461-b05b-b6a80eed1431 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-09-19T07:35:41.000000 | 2025-09-19 07:35:42.888894 | orchestrator | | 0126b121-2385-4fb5-8f2b-5bf673d27b33 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-09-19T07:35:42.000000 | 2025-09-19 07:35:42.888929 | orchestrator | | 90c49111-bd0b-42bc-8a92-8a850631582d | 
nova-compute | testbed-node-4 | nova | enabled | up | 2025-09-19T07:35:42.000000 | 2025-09-19 07:35:42.888940 | orchestrator | | e46dc75a-332f-4cff-a6f5-849be6d38912 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-09-19T07:35:42.000000 | 2025-09-19 07:35:42.888951 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-19 07:35:43.182163 | orchestrator | + openstack hypervisor list 2025-09-19 07:35:47.600265 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-19 07:35:47.600380 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-09-19 07:35:47.600393 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-19 07:35:47.600404 | orchestrator | | 5fcd8644-0982-4790-a34d-8349924eb9f0 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-09-19 07:35:47.600413 | orchestrator | | 8d3b1176-d493-41a9-870d-7812ea0a3be9 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-09-19 07:35:47.600423 | orchestrator | | f9e92254-eb89-4059-b4bc-3c9e96d316a4 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-09-19 07:35:47.600433 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-19 07:35:47.852093 | orchestrator | 2025-09-19 07:35:47.852158 | orchestrator | # Run OpenStack test play 2025-09-19 07:35:47.852165 | orchestrator | 2025-09-19 07:35:47.852170 | orchestrator | + echo 2025-09-19 07:35:47.852175 | orchestrator | + echo '# Run OpenStack test play' 2025-09-19 07:35:47.852180 | orchestrator | + echo 2025-09-19 07:35:47.852184 | orchestrator | + osism apply --environment openstack test 2025-09-19 07:35:49.640953 | orchestrator | 2025-09-19 07:35:49 | INFO  | Trying to run play test in environment openstack 
2025-09-19 07:35:49.718873 | orchestrator | 2025-09-19 07:35:49 | INFO  | Task 1463d31d-f33c-48fc-9a7f-88128e471512 (test) was prepared for execution. 2025-09-19 07:35:49.718981 | orchestrator | 2025-09-19 07:35:49 | INFO  | It takes a moment until task 1463d31d-f33c-48fc-9a7f-88128e471512 (test) has been started and output is visible here. 2025-09-19 07:37:27.008812 | orchestrator | 2025-09-19 07:37:27 | INFO  | Trying to run play test in environment openstack 2025-09-19 07:37:27.010634 | orchestrator | 2025-09-19 07:37:27 | INFO  | Task 38b58d97-f697-4407-834f-290e044e90d2 (test) was prepared for execution. 2025-09-19 07:37:27.010670 | orchestrator | 2025-09-19 07:37:27 | INFO  | It takes a moment until task 38b58d97-f697-4407-834f-290e044e90d2 (test) has been started and output is visible here. 2025-09-19 07:38:23.050371 | orchestrator | 2025-09-19 07:38:23.050482 | orchestrator | PLAY [Create test project] ***************************************************** 2025-09-19 07:38:23.050497 | orchestrator | 2025-09-19 07:38:23.050509 | orchestrator | TASK [Create test domain] ****************************************************** 2025-09-19 07:38:23.050521 | orchestrator | Friday 19 September 2025 07:35:53 +0000 (0:00:00.077) 0:00:00.077 ****** 2025-09-19 07:38:23.050532 | orchestrator | changed: [localhost] 2025-09-19 07:38:23.050544 | orchestrator | 2025-09-19 07:38:23.050555 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-09-19 07:38:23.050566 | orchestrator | Friday 19 September 2025 07:35:57 +0000 (0:00:03.619) 0:00:03.696 ****** 2025-09-19 07:38:23.050577 | orchestrator | changed: [localhost] 2025-09-19 07:38:23.050588 | orchestrator | 2025-09-19 07:38:23.050599 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-09-19 07:38:23.050610 | orchestrator | Friday 19 September 2025 07:36:01 +0000 (0:00:03.997) 0:00:07.694 ****** 2025-09-19 07:38:23.050620 | 
orchestrator | changed: [localhost] 2025-09-19 07:38:23.050631 | orchestrator | 2025-09-19 07:38:23.050642 | orchestrator | TASK [Create test project] ***************************************************** 2025-09-19 07:38:23.050653 | orchestrator | Friday 19 September 2025 07:36:07 +0000 (0:00:06.243) 0:00:13.938 ****** 2025-09-19 07:38:23.050687 | orchestrator | changed: [localhost] 2025-09-19 07:38:23.050698 | orchestrator | 2025-09-19 07:38:23.050709 | orchestrator | TASK [Create test user] ******************************************************** 2025-09-19 07:38:23.050720 | orchestrator | Friday 19 September 2025 07:36:11 +0000 (0:00:03.982) 0:00:17.920 ****** 2025-09-19 07:38:23.050731 | orchestrator | changed: [localhost] 2025-09-19 07:38:23.050741 | orchestrator | 2025-09-19 07:38:23.050752 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-09-19 07:38:23.050763 | orchestrator | Friday 19 September 2025 07:36:15 +0000 (0:00:04.207) 0:00:22.128 ****** 2025-09-19 07:38:23.050774 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-09-19 07:38:23.050785 | orchestrator | changed: [localhost] => (item=member) 2025-09-19 07:38:23.050797 | orchestrator | changed: [localhost] => (item=creator) 2025-09-19 07:38:23.050808 | orchestrator | 2025-09-19 07:38:23.050819 | orchestrator | TASK [Create test server group] ************************************************ 2025-09-19 07:38:23.050830 | orchestrator | Friday 19 September 2025 07:36:27 +0000 (0:00:12.080) 0:00:34.209 ****** 2025-09-19 07:38:23.050840 | orchestrator | changed: [localhost] 2025-09-19 07:38:23.050851 | orchestrator | 2025-09-19 07:38:23.050862 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-09-19 07:38:23.050873 | orchestrator | Friday 19 September 2025 07:36:31 +0000 (0:00:04.169) 0:00:38.378 ****** 2025-09-19 07:38:23.050883 | orchestrator | changed: [localhost] 2025-09-19 
07:38:23.050894 | orchestrator | 2025-09-19 07:38:23.050905 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-09-19 07:38:23.050916 | orchestrator | Friday 19 September 2025 07:36:36 +0000 (0:00:04.634) 0:00:43.013 ****** 2025-09-19 07:38:23.050927 | orchestrator | changed: [localhost] 2025-09-19 07:38:23.050937 | orchestrator | 2025-09-19 07:38:23.050948 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-09-19 07:38:23.050959 | orchestrator | Friday 19 September 2025 07:36:40 +0000 (0:00:04.180) 0:00:47.193 ****** 2025-09-19 07:38:23.050994 | orchestrator | changed: [localhost] 2025-09-19 07:38:23.051006 | orchestrator | 2025-09-19 07:38:23.051017 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-09-19 07:38:23.051028 | orchestrator | Friday 19 September 2025 07:36:44 +0000 (0:00:03.497) 0:00:50.691 ****** 2025-09-19 07:38:23.051038 | orchestrator | changed: [localhost] 2025-09-19 07:38:23.051049 | orchestrator | 2025-09-19 07:38:23.051060 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-09-19 07:38:23.051071 | orchestrator | Friday 19 September 2025 07:36:48 +0000 (0:00:03.745) 0:00:54.436 ****** 2025-09-19 07:38:23.051082 | orchestrator | changed: [localhost] 2025-09-19 07:38:23.051093 | orchestrator | 2025-09-19 07:38:23.051103 | orchestrator | TASK [Create test network topology] ******************************************** 2025-09-19 07:38:23.051114 | orchestrator | Friday 19 September 2025 07:36:51 +0000 (0:00:03.489) 0:00:57.926 ****** 2025-09-19 07:38:23.051125 | orchestrator | changed: [localhost] 2025-09-19 07:38:23.051135 | orchestrator | 2025-09-19 07:38:23.051146 | orchestrator | TASK [Create test instances] *************************************************** 2025-09-19 07:38:23.051157 | orchestrator | Friday 19 September 2025 07:37:05 +0000 
(0:00:14.046) 0:01:11.972 ******
2025-09-19 07:38:23.051170 | orchestrator | failed: [localhost] (item=test) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test", "msg": "No Flavor found for SCS-1L-1-5"}
2025-09-19 07:38:23.051196 | orchestrator | failed: [localhost] (item=test-1) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test-1", "msg": "No Flavor found for SCS-1L-1-5"}
2025-09-19 07:38:23.051207 | orchestrator | failed: [localhost] (item=test-2) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test-2", "msg": "No Flavor found for SCS-1L-1-5"}
2025-09-19 07:38:23.051226 | orchestrator | failed: [localhost] (item=test-3) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test-3", "msg": "No Flavor found for SCS-1L-1-5"}
2025-09-19 07:38:23.051237 | orchestrator | failed: [localhost] (item=test-4) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test-4", "msg": "No Flavor found for SCS-1L-1-5"}
2025-09-19 07:38:23.051248 | orchestrator |
2025-09-19 07:38:23.051259 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:38:23.051285 | orchestrator | localhost : ok=13  changed=13  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-09-19 07:38:23.051297 | orchestrator |
2025-09-19 07:38:23.051308 | orchestrator |
2025-09-19 07:38:23.051319 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:38:23.051330 | orchestrator | Friday 19 September 2025 07:37:26 +0000 (0:00:21.287) 0:01:33.259 ******
2025-09-19 07:38:23.051340 | orchestrator |
=============================================================================== 2025-09-19 07:38:23.051351 | orchestrator | Create test instances -------------------------------------------------- 21.29s 2025-09-19 07:38:23.051362 | orchestrator | Create test network topology ------------------------------------------- 14.05s 2025-09-19 07:38:23.051372 | orchestrator | Add member roles to user test ------------------------------------------ 12.08s 2025-09-19 07:38:23.051383 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.24s 2025-09-19 07:38:23.051393 | orchestrator | Create ssh security group ----------------------------------------------- 4.63s 2025-09-19 07:38:23.051404 | orchestrator | Create test user -------------------------------------------------------- 4.21s 2025-09-19 07:38:23.051415 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.18s 2025-09-19 07:38:23.051430 | orchestrator | Create test server group ------------------------------------------------ 4.17s 2025-09-19 07:38:23.051441 | orchestrator | Create test-admin user -------------------------------------------------- 4.00s 2025-09-19 07:38:23.051452 | orchestrator | Create test project ----------------------------------------------------- 3.98s 2025-09-19 07:38:23.051463 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.75s 2025-09-19 07:38:23.051473 | orchestrator | Create test domain ------------------------------------------------------ 3.62s 2025-09-19 07:38:23.051484 | orchestrator | Create icmp security group ---------------------------------------------- 3.50s 2025-09-19 07:38:23.051495 | orchestrator | Create test keypair ----------------------------------------------------- 3.49s 2025-09-19 07:38:23.051505 | orchestrator | 2025-09-19 07:38:23.051516 | orchestrator | PLAY [Create test project] ***************************************************** 2025-09-19 
07:38:23.051526 | orchestrator | 2025-09-19 07:38:23.051548 | orchestrator | TASK [Create test domain] ****************************************************** 2025-09-19 07:38:23.051560 | orchestrator | Friday 19 September 2025 07:37:30 +0000 (0:00:00.069) 0:00:00.069 ****** 2025-09-19 07:38:23.051571 | orchestrator | ok: [localhost] 2025-09-19 07:38:23.051582 | orchestrator | 2025-09-19 07:38:23.051593 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-09-19 07:38:23.051608 | orchestrator | Friday 19 September 2025 07:37:33 +0000 (0:00:03.422) 0:00:03.492 ****** 2025-09-19 07:38:23.051619 | orchestrator | ok: [localhost] 2025-09-19 07:38:23.051630 | orchestrator | 2025-09-19 07:38:23.051641 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-09-19 07:38:23.051651 | orchestrator | Friday 19 September 2025 07:37:37 +0000 (0:00:03.318) 0:00:06.811 ****** 2025-09-19 07:38:23.051662 | orchestrator | changed: [localhost] 2025-09-19 07:38:23.051673 | orchestrator | 2025-09-19 07:38:23.051684 | orchestrator | TASK [Create test project] ***************************************************** 2025-09-19 07:38:23.051695 | orchestrator | Friday 19 September 2025 07:37:43 +0000 (0:00:05.768) 0:00:12.579 ****** 2025-09-19 07:38:23.051712 | orchestrator | ok: [localhost] 2025-09-19 07:38:23.051723 | orchestrator | 2025-09-19 07:38:23.051734 | orchestrator | TASK [Create test user] ******************************************************** 2025-09-19 07:38:23.051744 | orchestrator | Friday 19 September 2025 07:37:46 +0000 (0:00:03.552) 0:00:16.131 ****** 2025-09-19 07:38:23.051755 | orchestrator | ok: [localhost] 2025-09-19 07:38:23.051766 | orchestrator | 2025-09-19 07:38:23.051777 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-09-19 07:38:23.051787 | orchestrator | Friday 19 September 2025 07:37:50 +0000 (0:00:03.648) 
0:00:19.780 ****** 2025-09-19 07:38:23.051798 | orchestrator | ok: [localhost] => (item=load-balancer_member) 2025-09-19 07:38:23.051809 | orchestrator | ok: [localhost] => (item=member) 2025-09-19 07:38:23.051820 | orchestrator | ok: [localhost] => (item=creator) 2025-09-19 07:38:23.051831 | orchestrator | 2025-09-19 07:38:23.051841 | orchestrator | TASK [Create test server group] ************************************************ 2025-09-19 07:38:23.051852 | orchestrator | Friday 19 September 2025 07:38:01 +0000 (0:00:10.877) 0:00:30.658 ****** 2025-09-19 07:38:23.051863 | orchestrator | ok: [localhost] 2025-09-19 07:38:23.051874 | orchestrator | 2025-09-19 07:38:23.051884 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-09-19 07:38:23.051895 | orchestrator | Friday 19 September 2025 07:38:04 +0000 (0:00:03.838) 0:00:34.496 ****** 2025-09-19 07:38:23.051906 | orchestrator | ok: [localhost] 2025-09-19 07:38:23.051916 | orchestrator | 2025-09-19 07:38:23.051984 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-09-19 07:38:23.051998 | orchestrator | Friday 19 September 2025 07:38:08 +0000 (0:00:03.772) 0:00:38.269 ****** 2025-09-19 07:38:23.052009 | orchestrator | ok: [localhost] 2025-09-19 07:38:23.052019 | orchestrator | 2025-09-19 07:38:23.052030 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-09-19 07:38:23.052041 | orchestrator | Friday 19 September 2025 07:38:12 +0000 (0:00:03.804) 0:00:42.073 ****** 2025-09-19 07:38:23.052052 | orchestrator | ok: [localhost] 2025-09-19 07:38:23.052062 | orchestrator | 2025-09-19 07:38:23.052073 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-09-19 07:38:23.052084 | orchestrator | Friday 19 September 2025 07:38:16 +0000 (0:00:03.500) 0:00:45.574 ****** 2025-09-19 07:38:23.052094 | orchestrator | ok: [localhost] 
2025-09-19 07:38:23.052105 | orchestrator |
2025-09-19 07:38:23.052116 | orchestrator | TASK [Create test keypair] *****************************************************
2025-09-19 07:38:23.052126 | orchestrator | Friday 19 September 2025 07:38:19 +0000 (0:00:03.701) 0:00:49.276 ******
2025-09-19 07:38:23.052137 | orchestrator | ok: [localhost]
2025-09-19 07:38:23.052148 | orchestrator |
2025-09-19 07:38:23.052158 | orchestrator | TASK [Create test network topology] ********************************************
2025-09-19 07:38:23.052175 | orchestrator | Friday 19 September 2025 07:38:23 +0000 (0:00:03.280) 0:00:52.557 ******
2025-09-19 07:38:49.684319 | orchestrator | changed: [localhost]
2025-09-19 07:38:49.684434 | orchestrator |
2025-09-19 07:38:49.684451 | orchestrator | TASK [Create test instances] ***************************************************
2025-09-19 07:38:49.684465 | orchestrator | Friday 19 September 2025 07:38:28 +0000 (0:00:05.754) 0:00:58.311 ******
2025-09-19 07:38:49.684478 | orchestrator | failed: [localhost] (item=test) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test", "msg": "No Flavor found for SCS-1L-1-5"}
2025-09-19 07:38:49.684493 | orchestrator | failed: [localhost] (item=test-1) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test-1", "msg": "No Flavor found for SCS-1L-1-5"}
2025-09-19 07:38:49.684504 | orchestrator | failed: [localhost] (item=test-2) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test-2", "msg": "No Flavor found for SCS-1L-1-5"}
2025-09-19 07:38:49.684515 | orchestrator | failed: [localhost] (item=test-3) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test-3", "msg": "No Flavor found for SCS-1L-1-5"}
2025-09-19 07:38:49.684549 | orchestrator | failed: [localhost] (item=test-4) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test-4", "msg": "No Flavor found for SCS-1L-1-5"}
2025-09-19 07:38:49.684560 | orchestrator |
2025-09-19 07:38:49.684571 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:38:49.684583 | orchestrator | localhost : ok=13  changed=2  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-09-19 07:38:49.684595 | orchestrator |
2025-09-19 07:38:49.684606 | orchestrator |
2025-09-19 07:38:49.684617 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:38:49.684628 | orchestrator | Friday 19 September 2025 07:38:49 +0000 (0:00:20.655) 0:01:18.967 ******
2025-09-19 07:38:49.684639 | orchestrator | ===============================================================================
2025-09-19 07:38:49.684665 | orchestrator | Create test instances -------------------------------------------------- 20.66s
2025-09-19 07:38:49.684676 | orchestrator | Add member roles to user test ------------------------------------------ 10.88s
2025-09-19 07:38:49.684687 | orchestrator | Add manager role to user test-admin ------------------------------------- 5.77s
2025-09-19 07:38:49.684698 | orchestrator | Create test network topology -------------------------------------------- 5.75s
2025-09-19 07:38:49.684709 | orchestrator | Create test server group ------------------------------------------------ 3.84s
2025-09-19 07:38:49.684719 | orchestrator | Add rule to ssh security group ------------------------------------------ 3.80s
2025-09-19 07:38:49.684730 | orchestrator | Create ssh security group ----------------------------------------------- 3.77s
2025-09-19 07:38:49.684741 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.70s
2025-09-19 07:38:49.684752 | orchestrator | Create test user -------------------------------------------------------- 3.65s
2025-09-19 07:38:49.684763 | orchestrator | Create test project ----------------------------------------------------- 3.55s
2025-09-19 07:38:49.684773 | orchestrator | Create icmp security group ---------------------------------------------- 3.50s
2025-09-19 07:38:49.684784 | orchestrator | Create test domain ------------------------------------------------------ 3.42s
2025-09-19 07:38:49.684795 | orchestrator | Create test-admin user -------------------------------------------------- 3.32s
2025-09-19 07:38:49.684806 | orchestrator | Create test keypair ----------------------------------------------------- 3.28s
2025-09-19 07:38:50.269509 | orchestrator | ERROR
2025-09-19 07:38:50.270034 | orchestrator | {
2025-09-19 07:38:50.270145 | orchestrator | "delta": "0:07:26.295848",
2025-09-19 07:38:50.270209 | orchestrator | "end": "2025-09-19 07:38:49.984868",
2025-09-19 07:38:50.270264 | orchestrator | "msg": "non-zero return code",
2025-09-19 07:38:50.270316 | orchestrator | "rc": 2,
2025-09-19 07:38:50.270364 | orchestrator | "start": "2025-09-19 07:31:23.689020"
2025-09-19 07:38:50.270410 | orchestrator | } failure
2025-09-19 07:38:50.321583 |
2025-09-19 07:38:50.321793 | PLAY RECAP
2025-09-19 07:38:50.321937 | orchestrator | ok: 23 changed: 10 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2025-09-19 07:38:50.322029 |
2025-09-19 07:38:50.557241 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-09-19 07:38:50.559564 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-19 07:38:51.296482 |
2025-09-19 07:38:51.296649 | PLAY [Post output play]
2025-09-19 07:38:51.313863 |
2025-09-19 07:38:51.313999 | LOOP [stage-output : Register sources]
2025-09-19 07:38:51.381049 |
2025-09-19 07:38:51.381319 | TASK [stage-output : Check sudo]
2025-09-19 07:38:52.184476 | orchestrator | sudo: a password is required
2025-09-19 07:38:52.420177 | orchestrator | ok: Runtime: 0:00:00.016254
2025-09-19 07:38:52.434635 |
2025-09-19 07:38:52.434811 | LOOP [stage-output : Set source and destination for files and folders]
2025-09-19 07:38:52.471560 |
2025-09-19 07:38:52.471972 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-09-19 07:38:52.540727 | orchestrator | ok
2025-09-19 07:38:52.549338 |
2025-09-19 07:38:52.549466 | LOOP [stage-output : Ensure target folders exist]
2025-09-19 07:38:52.999903 | orchestrator | ok: "docs"
2025-09-19 07:38:53.000234 |
2025-09-19 07:38:53.247051 | orchestrator | ok: "artifacts"
2025-09-19 07:38:53.504710 | orchestrator | ok: "logs"
2025-09-19 07:38:53.523659 |
2025-09-19 07:38:53.523866 | LOOP [stage-output : Copy files and folders to staging folder]
2025-09-19 07:38:53.565334 |
2025-09-19 07:38:53.565611 | TASK [stage-output : Make all log files readable]
2025-09-19 07:38:53.848195 | orchestrator | ok
2025-09-19 07:38:53.856922 |
2025-09-19 07:38:53.857064 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-09-19 07:38:53.892201 | orchestrator | skipping: Conditional result was False
2025-09-19 07:38:53.906330 |
2025-09-19 07:38:53.906480 | TASK [stage-output : Discover log files for compression]
2025-09-19 07:38:53.932042 | orchestrator | skipping: Conditional result was False
2025-09-19 07:38:53.945324 |
2025-09-19 07:38:53.945474 | LOOP [stage-output : Archive everything from logs]
2025-09-19 07:38:53.989674 |
2025-09-19 07:38:53.989915 | PLAY [Post cleanup play]
2025-09-19 07:38:53.999913 |
2025-09-19 07:38:54.000027 | TASK [Set cloud fact (Zuul deployment)]
2025-09-19 07:38:54.057422 | orchestrator | ok
2025-09-19 07:38:54.072810 |
2025-09-19 07:38:54.072987 | TASK [Set cloud fact (local deployment)]
2025-09-19 07:38:54.098457 | orchestrator | skipping: Conditional result was False
2025-09-19 07:38:54.113391 |
2025-09-19 07:38:54.113572 | TASK [Clean the cloud environment]
2025-09-19 07:38:54.694717 | orchestrator | 2025-09-19 07:38:54 - clean up servers
2025-09-19 07:38:55.445457 | orchestrator | 2025-09-19 07:38:55 - testbed-manager
2025-09-19 07:38:55.533785 | orchestrator | 2025-09-19 07:38:55 - testbed-node-0
2025-09-19 07:38:55.621237 | orchestrator | 2025-09-19 07:38:55 - testbed-node-2
2025-09-19 07:38:55.708238 | orchestrator | 2025-09-19 07:38:55 - testbed-node-1
2025-09-19 07:38:55.803616 | orchestrator | 2025-09-19 07:38:55 - testbed-node-4
2025-09-19 07:38:55.894050 | orchestrator | 2025-09-19 07:38:55 - testbed-node-5
2025-09-19 07:38:55.980634 | orchestrator | 2025-09-19 07:38:55 - testbed-node-3
2025-09-19 07:38:56.084059 | orchestrator | 2025-09-19 07:38:56 - clean up keypairs
2025-09-19 07:38:56.107194 | orchestrator | 2025-09-19 07:38:56 - testbed
2025-09-19 07:38:56.140758 | orchestrator | 2025-09-19 07:38:56 - wait for servers to be gone
2025-09-19 07:39:07.015318 | orchestrator | 2025-09-19 07:39:07 - clean up ports
2025-09-19 07:39:07.185811 | orchestrator | 2025-09-19 07:39:07 - 00fa1cb8-449f-4197-8bed-d2a8f258440a
2025-09-19 07:39:07.419392 | orchestrator | 2025-09-19 07:39:07 - 570fcc7b-19a1-431e-b080-128e1c1b9d4d
2025-09-19 07:39:07.887432 | orchestrator | 2025-09-19 07:39:07 - 65ac5ee0-d35d-49d7-900c-4033da22ffec
2025-09-19 07:39:08.181430 | orchestrator | 2025-09-19 07:39:08 - b8cdcc13-c04c-49a7-a6e0-ebb04a13a240
2025-09-19 07:39:08.386865 | orchestrator | 2025-09-19 07:39:08 - c9f29102-eec2-4d59-96b9-ec78f2aa81f4
2025-09-19 07:39:08.589638 | orchestrator | 2025-09-19 07:39:08 - f3872cc6-53df-4d76-8ec8-7b9a9921c797
2025-09-19 07:39:08.790971 | orchestrator | 2025-09-19 07:39:08 - f5a7e1a7-a77f-4ef9-b0d9-8348e8942dff
2025-09-19 07:39:08.996525 | orchestrator | 2025-09-19 07:39:08 - clean up volumes
2025-09-19 07:39:09.098173 | orchestrator | 2025-09-19 07:39:09 - testbed-volume-1-node-base
2025-09-19 07:39:09.136354 | orchestrator | 2025-09-19 07:39:09 - testbed-volume-0-node-base
2025-09-19 07:39:09.178818 | orchestrator | 2025-09-19 07:39:09 - testbed-volume-2-node-base
2025-09-19 07:39:09.217653 | orchestrator | 2025-09-19 07:39:09 - testbed-volume-4-node-base
2025-09-19 07:39:09.264890 | orchestrator | 2025-09-19 07:39:09 - testbed-volume-3-node-base
2025-09-19 07:39:09.309886 | orchestrator | 2025-09-19 07:39:09 - testbed-volume-5-node-base
2025-09-19 07:39:09.351444 | orchestrator | 2025-09-19 07:39:09 - testbed-volume-7-node-4
2025-09-19 07:39:09.394811 | orchestrator | 2025-09-19 07:39:09 - testbed-volume-4-node-4
2025-09-19 07:39:09.441475 | orchestrator | 2025-09-19 07:39:09 - testbed-volume-manager-base
2025-09-19 07:39:09.485468 | orchestrator | 2025-09-19 07:39:09 - testbed-volume-8-node-5
2025-09-19 07:39:09.528370 | orchestrator | 2025-09-19 07:39:09 - testbed-volume-5-node-5
2025-09-19 07:39:09.574343 | orchestrator | 2025-09-19 07:39:09 - testbed-volume-0-node-3
2025-09-19 07:39:09.620162 | orchestrator | 2025-09-19 07:39:09 - testbed-volume-3-node-3
2025-09-19 07:39:09.666937 | orchestrator | 2025-09-19 07:39:09 - testbed-volume-2-node-5
2025-09-19 07:39:09.707553 | orchestrator | 2025-09-19 07:39:09 - testbed-volume-6-node-3
2025-09-19 07:39:09.746674 | orchestrator | 2025-09-19 07:39:09 - testbed-volume-1-node-4
2025-09-19 07:39:09.788154 | orchestrator | 2025-09-19 07:39:09 - disconnect routers
2025-09-19 07:39:09.850648 | orchestrator | 2025-09-19 07:39:09 - testbed
2025-09-19 07:39:11.162229 | orchestrator | 2025-09-19 07:39:11 - clean up subnets
2025-09-19 07:39:11.201769 | orchestrator | 2025-09-19 07:39:11 - subnet-testbed-management
2025-09-19 07:39:11.357824 | orchestrator | 2025-09-19 07:39:11 - clean up networks
2025-09-19 07:39:11.500296 | orchestrator | 2025-09-19 07:39:11 - net-testbed-management
2025-09-19 07:39:11.831281 | orchestrator | 2025-09-19 07:39:11 - clean up security groups
2025-09-19 07:39:11.869565 | orchestrator | 2025-09-19 07:39:11 - testbed-node
2025-09-19 07:39:11.977282 | orchestrator | 2025-09-19 07:39:11 - testbed-management
2025-09-19 07:39:12.090416 | orchestrator | 2025-09-19 07:39:12 - clean up floating ips
2025-09-19 07:39:12.128311 | orchestrator | 2025-09-19 07:39:12 - 81.163.193.189
2025-09-19 07:39:12.574160 | orchestrator | 2025-09-19 07:39:12 - clean up routers
2025-09-19 07:39:12.694880 | orchestrator | 2025-09-19 07:39:12 - testbed
2025-09-19 07:39:14.170817 | orchestrator | ok: Runtime: 0:00:19.599240
2025-09-19 07:39:14.173105 |
2025-09-19 07:39:14.173204 | PLAY RECAP
2025-09-19 07:39:14.173264 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-09-19 07:39:14.173289 |
2025-09-19 07:39:14.306740 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-19 07:39:14.307861 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-19 07:39:15.028273 |
2025-09-19 07:39:15.028429 | PLAY [Cleanup play]
2025-09-19 07:39:15.044397 |
2025-09-19 07:39:15.044532 | TASK [Set cloud fact (Zuul deployment)]
2025-09-19 07:39:15.086320 | orchestrator | ok
2025-09-19 07:39:15.093394 |
2025-09-19 07:39:15.093514 | TASK [Set cloud fact (local deployment)]
2025-09-19 07:39:15.127239 | orchestrator | skipping: Conditional result was False
2025-09-19 07:39:15.136968 |
2025-09-19 07:39:15.137074 | TASK [Clean the cloud environment]
2025-09-19 07:39:16.244913 | orchestrator | 2025-09-19 07:39:16 - clean up servers
2025-09-19 07:39:16.724320 | orchestrator | 2025-09-19 07:39:16 - clean up keypairs
2025-09-19 07:39:16.742399 | orchestrator | 2025-09-19 07:39:16 - wait for servers to be gone
2025-09-19 07:39:16.785716 | orchestrator | 2025-09-19 07:39:16 - clean up ports
2025-09-19 07:39:16.856449 | orchestrator | 2025-09-19 07:39:16 - clean up volumes
2025-09-19 07:39:16.918210 | orchestrator | 2025-09-19 07:39:16 - disconnect routers
2025-09-19 07:39:16.940216 | orchestrator | 2025-09-19 07:39:16 - clean up subnets
2025-09-19 07:39:16.964714 | orchestrator | 2025-09-19 07:39:16 - clean up networks
2025-09-19 07:39:17.124990 | orchestrator | 2025-09-19 07:39:17 - clean up security groups
2025-09-19 07:39:17.158625 | orchestrator | 2025-09-19 07:39:17 - clean up floating ips
2025-09-19 07:39:17.185259 | orchestrator | 2025-09-19 07:39:17 - clean up routers
2025-09-19 07:39:17.675156 | orchestrator | ok: Runtime: 0:00:01.317669
2025-09-19 07:39:17.677652 |
2025-09-19 07:39:17.677752 | PLAY RECAP
2025-09-19 07:39:17.677845 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-09-19 07:39:17.677881 |
2025-09-19 07:39:17.794817 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-19 07:39:17.797302 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-19 07:39:18.551568 |
2025-09-19 07:39:18.551720 | PLAY [Base post-fetch]
2025-09-19 07:39:18.566985 |
2025-09-19 07:39:18.567115 | TASK [fetch-output : Set log path for multiple nodes]
2025-09-19 07:39:18.633360 | orchestrator | skipping: Conditional result was False
2025-09-19 07:39:18.648063 |
2025-09-19 07:39:18.648267 | TASK [fetch-output : Set log path for single node]
2025-09-19 07:39:18.707674 | orchestrator | ok
2025-09-19 07:39:18.716755 |
2025-09-19 07:39:18.716924 | LOOP [fetch-output : Ensure local output dirs]
2025-09-19 07:39:19.209017 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/f4c728bda45d4a6b95911456e6e30ad1/work/logs"
2025-09-19 07:39:19.481036 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/f4c728bda45d4a6b95911456e6e30ad1/work/artifacts"
2025-09-19 07:39:19.754259 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/f4c728bda45d4a6b95911456e6e30ad1/work/docs"
2025-09-19 07:39:19.781243 |
2025-09-19 07:39:19.781433 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-09-19 07:39:20.729122 | orchestrator | changed: .d..t...... ./
2025-09-19 07:39:20.729509 | orchestrator | changed: All items complete
2025-09-19 07:39:20.729573 |
2025-09-19 07:39:21.472647 | orchestrator | changed: .d..t...... ./
2025-09-19 07:39:22.201531 | orchestrator | changed: .d..t...... ./
2025-09-19 07:39:22.222674 |
2025-09-19 07:39:22.222863 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-09-19 07:39:22.251757 | orchestrator | skipping: Conditional result was False
2025-09-19 07:39:22.259241 | orchestrator | skipping: Conditional result was False
2025-09-19 07:39:22.276569 |
2025-09-19 07:39:22.276681 | PLAY RECAP
2025-09-19 07:39:22.276735 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-09-19 07:39:22.276777 |
2025-09-19 07:39:22.398964 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-19 07:39:22.400053 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-19 07:39:23.127872 |
2025-09-19 07:39:23.128033 | PLAY [Base post]
2025-09-19 07:39:23.142360 |
2025-09-19 07:39:23.142488 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-09-19 07:39:24.087499 | orchestrator | changed
2025-09-19 07:39:24.099040 |
2025-09-19 07:39:24.099191 | PLAY RECAP
2025-09-19 07:39:24.099273 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-09-19 07:39:24.099355 |
2025-09-19 07:39:24.219697 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-19 07:39:24.220801 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-09-19 07:39:24.993712 |
2025-09-19 07:39:24.993893 | PLAY [Base post-logs]
2025-09-19 07:39:25.004231 |
2025-09-19 07:39:25.004365 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-09-19 07:39:25.460507 | localhost | changed
2025-09-19 07:39:25.475923 |
2025-09-19 07:39:25.476095 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-09-19 07:39:25.513296 | localhost | ok
2025-09-19 07:39:25.518207 |
2025-09-19 07:39:25.518354 | TASK [Set zuul-log-path fact]
2025-09-19 07:39:25.536357 | localhost | ok
2025-09-19 07:39:25.549679 |
2025-09-19 07:39:25.549873 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-19 07:39:25.586064 | localhost | ok
2025-09-19 07:39:25.590914 |
2025-09-19 07:39:25.591069 | TASK [upload-logs : Create log directories]
2025-09-19 07:39:26.090463 | localhost | changed
2025-09-19 07:39:26.096295 |
2025-09-19 07:39:26.096456 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-09-19 07:39:26.589366 | localhost -> localhost | ok: Runtime: 0:00:00.006933
2025-09-19 07:39:26.593572 |
2025-09-19 07:39:26.593681 | TASK [upload-logs : Upload logs to log server]
2025-09-19 07:39:27.166472 | localhost | Output suppressed because no_log was given
2025-09-19 07:39:27.168414 |
2025-09-19 07:39:27.168532 | LOOP [upload-logs : Compress console log and json output]
2025-09-19 07:39:27.218489 | localhost | skipping: Conditional result was False
2025-09-19 07:39:27.225743 | localhost | skipping: Conditional result was False
2025-09-19 07:39:27.242108 |
2025-09-19 07:39:27.242341 | LOOP [upload-logs : Upload compressed console log and json output]
2025-09-19 07:39:27.289084 | localhost | skipping: Conditional result was False
2025-09-19 07:39:27.289375 |
2025-09-19 07:39:27.302512 | localhost | skipping: Conditional result was False
2025-09-19 07:39:27.312260 |
2025-09-19 07:39:27.312365 | LOOP [upload-logs : Upload console log and json output]
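Editor's note on the failure above: both deploy attempts aborted in "Create test instances" with `No Flavor found for SCS-1L-1-5`, i.e. the flavor name the testbed requested does not exist in the target cloud (the usual first check is `openstack flavor list` against the same cloud credentials). A minimal sketch of the lookup-with-fallbacks pattern that would let such a job degrade instead of failing the whole loop; `resolve_flavor` and every flavor name below are hypothetical illustrations, not part of the testbed code:

```python
# Hypothetical sketch: resolve a preferred flavor name against the set of
# flavor names a cloud actually offers, trying fallbacks before giving up.
# The flavor names here are assumptions; a real cloud's inventory differs.

def resolve_flavor(preferred, fallbacks, available):
    """Return the first name among preferred + fallbacks that exists in
    `available`, or None when nothing matches (the failure mode seen above)."""
    for name in (preferred, *fallbacks):
        if name in available:
            return name
    return None

if __name__ == "__main__":
    cloud_flavors = {"SCS-2V-4", "SCS-1V-2-5", "m1.small"}  # assumed inventory
    # The job asked for SCS-1L-1-5, which this cloud does not offer:
    print(resolve_flavor("SCS-1L-1-5", [], cloud_flavors))               # None
    # With fallbacks, the lookup degrades gracefully instead of erroring:
    print(resolve_flavor("SCS-1L-1-5", ["SCS-1V-2-5"], cloud_flavors))   # SCS-1V-2-5
```

In the Ansible context, the equivalent fix is either to create the missing flavor in the cloud or to parameterize the instance-creation task with a flavor variable that is validated (and optionally substituted) before the loop runs.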